00:00:00.001 Started by upstream project "autotest-per-patch" build number 121026 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.018 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/dsa-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.019 The recommended git tool is: git 00:00:00.019 using credential 00000000-0000-0000-0000-000000000002 00:00:00.020 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/dsa-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.035 Fetching changes from the remote Git repository 00:00:00.037 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.061 Using shallow fetch with depth 1 00:00:00.061 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.061 > git --version # timeout=10 00:00:00.093 > git --version # 'git version 2.39.2' 00:00:00.093 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.094 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.094 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:02.781 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:02.793 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:02.805 Checking out Revision 6e1fadd1eee50389429f9abb33dde5face8ca717 (FETCH_HEAD) 00:00:02.805 > git config core.sparsecheckout # timeout=10 00:00:02.815 > git read-tree -mu HEAD # timeout=10 00:00:02.831 > git checkout -f 6e1fadd1eee50389429f9abb33dde5face8ca717 # timeout=5 00:00:02.847 Commit message: "pool: attach build logs for failed merge builds" 00:00:02.847 > git rev-list --no-walk 6e1fadd1eee50389429f9abb33dde5face8ca717 # timeout=10 00:00:02.943 [Pipeline] Start of Pipeline 00:00:02.957 [Pipeline] library 00:00:02.959 Loading library shm_lib@master 00:00:02.959 Library shm_lib@master is cached. Copying from home. 00:00:02.974 [Pipeline] node 00:00:02.980 Running on FCP11 in /var/jenkins/workspace/dsa-phy-autotest 00:00:02.985 [Pipeline] { 00:00:02.994 [Pipeline] catchError 00:00:02.995 [Pipeline] { 00:00:03.007 [Pipeline] wrap 00:00:03.015 [Pipeline] { 00:00:03.023 [Pipeline] stage 00:00:03.025 [Pipeline] { (Prologue) 00:00:03.192 [Pipeline] sh 00:00:03.479 + logger -p user.info -t JENKINS-CI 00:00:03.495 [Pipeline] echo 00:00:03.496 Node: FCP11 00:00:03.502 [Pipeline] sh 00:00:03.797 [Pipeline] setCustomBuildProperty 00:00:03.808 [Pipeline] echo 00:00:03.809 Cleanup processes 00:00:03.813 [Pipeline] sh 00:00:04.093 + sudo pgrep -af /var/jenkins/workspace/dsa-phy-autotest/spdk 00:00:04.093 884871 sudo pgrep -af /var/jenkins/workspace/dsa-phy-autotest/spdk 00:00:04.104 [Pipeline] sh 00:00:04.382 ++ sudo pgrep -af /var/jenkins/workspace/dsa-phy-autotest/spdk 00:00:04.382 ++ grep -v 'sudo pgrep' 00:00:04.382 ++ awk '{print $1}' 00:00:04.382 + sudo kill -9 00:00:04.382 + true 00:00:04.402 [Pipeline] cleanWs 00:00:04.414 [WS-CLEANUP] Deleting project workspace... 00:00:04.414 [WS-CLEANUP] Deferred wipeout is used... 
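A minimal sketch of the process-cleanup step traced above (the workspace path and the pipe stages are taken from the xtrace lines; the real logic lives in the pipeline script, not in this log):

    WORKSPACE=/var/jenkins/workspace/dsa-phy-autotest
    # Collect PIDs of leftover SPDK processes from a previous run: drop the
    # pgrep invocation itself, keep only the PID column.
    pids=$(sudo pgrep -af "$WORKSPACE/spdk" | grep -v 'sudo pgrep' | awk '{print $1}')
    # 'kill -9' exits non-zero when the PID list is empty; the '|| true'
    # (the '+ true' entry above) keeps an errexit shell from failing the stage.
    sudo kill -9 $pids || true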
00:00:04.421 [WS-CLEANUP] done 00:00:04.425 [Pipeline] setCustomBuildProperty 00:00:04.437 [Pipeline] sh 00:00:04.715 + sudo git config --global --replace-all safe.directory '*' 00:00:04.769 [Pipeline] nodesByLabel 00:00:04.770 Found a total of 1 nodes with the 'sorcerer' label 00:00:04.779 [Pipeline] httpRequest 00:00:04.785 HttpMethod: GET 00:00:04.785 URL: http://10.211.164.96/packages/jbp_6e1fadd1eee50389429f9abb33dde5face8ca717.tar.gz 00:00:04.788 Sending request to url: http://10.211.164.96/packages/jbp_6e1fadd1eee50389429f9abb33dde5face8ca717.tar.gz 00:00:04.790 Response Code: HTTP/1.1 200 OK 00:00:04.791 Success: Status code 200 is in the accepted range: 200,404 00:00:04.791 Saving response body to /var/jenkins/workspace/dsa-phy-autotest/jbp_6e1fadd1eee50389429f9abb33dde5face8ca717.tar.gz 00:00:05.812 [Pipeline] sh 00:00:06.094 + tar --no-same-owner -xf jbp_6e1fadd1eee50389429f9abb33dde5face8ca717.tar.gz 00:00:06.111 [Pipeline] httpRequest 00:00:06.117 HttpMethod: GET 00:00:06.117 URL: http://10.211.164.96/packages/spdk_ea150257daeafcf9aa3bca443207227fe85667c5.tar.gz 00:00:06.120 Sending request to url: http://10.211.164.96/packages/spdk_ea150257daeafcf9aa3bca443207227fe85667c5.tar.gz 00:00:06.143 Response Code: HTTP/1.1 200 OK 00:00:06.144 Success: Status code 200 is in the accepted range: 200,404 00:00:06.144 Saving response body to /var/jenkins/workspace/dsa-phy-autotest/spdk_ea150257daeafcf9aa3bca443207227fe85667c5.tar.gz 00:01:08.247 [Pipeline] sh 00:01:08.529 + tar --no-same-owner -xf spdk_ea150257daeafcf9aa3bca443207227fe85667c5.tar.gz 00:01:11.093 [Pipeline] sh 00:01:11.376 + git -C spdk log --oneline -n5 00:01:11.376 ea150257d nvmf/rpc: fix input validation for nvmf_subsystem_add_listener 00:01:11.376 dd57ed3e8 sma: add listener check on vfio device creation 00:01:11.376 d36d2b7e8 doc: mark adrfam as optional 00:01:11.376 129e6ba3b test/nvmf: add missing remove listener discovery 00:01:11.376 38dca48f0 libvfio-user: update submodule to point to `spdk` branch 00:01:11.390 [Pipeline] } 00:01:11.413 [Pipeline] // stage 00:01:11.423 [Pipeline] stage 00:01:11.425 [Pipeline] { (Prepare) 00:01:11.445 [Pipeline] writeFile 00:01:11.460 [Pipeline] sh 00:01:11.743 + logger -p user.info -t JENKINS-CI 00:01:11.756 [Pipeline] sh 00:01:12.036 + logger -p user.info -t JENKINS-CI 00:01:12.050 [Pipeline] sh 00:01:12.335 + cat autorun-spdk.conf 00:01:12.335 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:12.335 SPDK_TEST_ACCEL_DSA=1 00:01:12.335 SPDK_TEST_ACCEL_IAA=1 00:01:12.335 SPDK_TEST_NVMF=1 00:01:12.335 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:12.335 SPDK_RUN_ASAN=1 00:01:12.335 SPDK_RUN_UBSAN=1 00:01:12.343 RUN_NIGHTLY=0 00:01:12.348 [Pipeline] readFile 00:01:12.373 [Pipeline] withEnv 00:01:12.375 [Pipeline] { 00:01:12.389 [Pipeline] sh 00:01:12.674 + set -ex 00:01:12.674 + [[ -f /var/jenkins/workspace/dsa-phy-autotest/autorun-spdk.conf ]] 00:01:12.674 + source /var/jenkins/workspace/dsa-phy-autotest/autorun-spdk.conf 00:01:12.674 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:12.674 ++ SPDK_TEST_ACCEL_DSA=1 00:01:12.674 ++ SPDK_TEST_ACCEL_IAA=1 00:01:12.674 ++ SPDK_TEST_NVMF=1 00:01:12.674 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:12.674 ++ SPDK_RUN_ASAN=1 00:01:12.674 ++ SPDK_RUN_UBSAN=1 00:01:12.674 ++ RUN_NIGHTLY=0 00:01:12.674 + case $SPDK_TEST_NVMF_NICS in 00:01:12.674 + DRIVERS= 00:01:12.674 + [[ -n '' ]] 00:01:12.674 + exit 0 00:01:12.684 [Pipeline] } 00:01:12.701 [Pipeline] // withEnv 00:01:12.706 [Pipeline] } 00:01:12.723 [Pipeline] // stage 00:01:12.733 [Pipeline] catchError 00:01:12.735 [Pipeline] { 
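The two httpRequest/tar pairs above pull pre-packaged sources from an internal cache keyed by commit SHA instead of cloning; note that 404 is also in the "accepted" status range, so a cache miss does not fail the stage. A rough stand-in for what the pipeline does (it uses the Jenkins httpRequest step; curl below is only illustrative, with the cache URL and SHAs copied from the log):

    CACHE=http://10.211.164.96/packages
    for pkg in jbp_6e1fadd1eee50389429f9abb33dde5face8ca717 \
               spdk_ea150257daeafcf9aa3bca443207227fe85667c5; do
        curl -sSO "$CACHE/$pkg.tar.gz"         # 200 expected; 404 (cache miss) is tolerated
        tar --no-same-owner -xf "$pkg.tar.gz"  # extract without preserving the archive's owner
    done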
00:01:12.750 [Pipeline] timeout
00:01:12.750 Timeout set to expire in 50 min
00:01:12.752 [Pipeline] {
00:01:12.767 [Pipeline] stage
00:01:12.769 [Pipeline] { (Tests)
00:01:12.785 [Pipeline] sh
00:01:13.072 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/dsa-phy-autotest
00:01:13.072 ++ readlink -f /var/jenkins/workspace/dsa-phy-autotest
00:01:13.072 + DIR_ROOT=/var/jenkins/workspace/dsa-phy-autotest
00:01:13.072 + [[ -n /var/jenkins/workspace/dsa-phy-autotest ]]
00:01:13.072 + DIR_SPDK=/var/jenkins/workspace/dsa-phy-autotest/spdk
00:01:13.072 + DIR_OUTPUT=/var/jenkins/workspace/dsa-phy-autotest/output
00:01:13.072 + [[ -d /var/jenkins/workspace/dsa-phy-autotest/spdk ]]
00:01:13.072 + [[ ! -d /var/jenkins/workspace/dsa-phy-autotest/output ]]
00:01:13.072 + mkdir -p /var/jenkins/workspace/dsa-phy-autotest/output
00:01:13.072 + [[ -d /var/jenkins/workspace/dsa-phy-autotest/output ]]
00:01:13.072 + cd /var/jenkins/workspace/dsa-phy-autotest
00:01:13.072 + source /etc/os-release
00:01:13.072 ++ NAME='Fedora Linux'
00:01:13.072 ++ VERSION='38 (Cloud Edition)'
00:01:13.072 ++ ID=fedora
00:01:13.072 ++ VERSION_ID=38
00:01:13.072 ++ VERSION_CODENAME=
00:01:13.072 ++ PLATFORM_ID=platform:f38
00:01:13.072 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:01:13.072 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:13.072 ++ LOGO=fedora-logo-icon
00:01:13.072 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:01:13.072 ++ HOME_URL=https://fedoraproject.org/
00:01:13.072 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:01:13.072 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:13.073 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:13.073 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:13.073 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:01:13.073 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:13.073 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:01:13.073 ++ SUPPORT_END=2024-05-14
00:01:13.073 ++ VARIANT='Cloud Edition'
00:01:13.073 ++ VARIANT_ID=cloud
00:01:13.073 + uname -a
00:01:13.073 Linux spdk-fcp-11 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:01:13.073 + sudo /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh status
00:01:15.615 Hugepages
00:01:15.615 node hugesize free / total
00:01:15.615 node0 1048576kB 0 / 0
00:01:15.615 node0 2048kB 0 / 0
00:01:15.615 node1 1048576kB 0 / 0
00:01:15.615 node1 2048kB 0 / 0
00:01:15.615
00:01:15.615 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:15.615 DSA 0000:6a:01.0 8086 0b25 0 idxd - -
00:01:15.615 IAA 0000:6a:02.0 8086 0cfe 0 idxd - -
00:01:15.615 DSA 0000:6f:01.0 8086 0b25 0 idxd - -
00:01:15.615 IAA 0000:6f:02.0 8086 0cfe 0 idxd - -
00:01:15.615 DSA 0000:74:01.0 8086 0b25 0 idxd - -
00:01:15.615 IAA 0000:74:02.0 8086 0cfe 0 idxd - -
00:01:15.615 DSA 0000:79:01.0 8086 0b25 0 idxd - -
00:01:15.615 IAA 0000:79:02.0 8086 0cfe 0 idxd - -
00:01:15.876 NVMe 0000:c9:00.0 8086 0a54 1 nvme nvme0 nvme0n1
00:01:15.876 NVMe 0000:ca:00.0 8086 0a54 1 nvme nvme2 nvme2n1
00:01:15.876 NVMe 0000:cb:00.0 8086 0a54 1 nvme nvme1 nvme1n1
00:01:15.876 DSA 0000:e7:01.0 8086 0b25 1 idxd - -
00:01:15.876 IAA 0000:e7:02.0 8086 0cfe 1 idxd - -
00:01:15.876 DSA 0000:ec:01.0 8086 0b25 1 idxd - -
00:01:15.876 IAA 0000:ec:02.0 8086 0cfe 1 idxd - -
00:01:15.876 DSA 0000:f1:01.0 8086 0b25 1 idxd - -
00:01:15.876 IAA 0000:f1:02.0 8086 0cfe 1 idxd - -
00:01:15.876 DSA 0000:f6:01.0 8086 0b25 1 idxd - -
00:01:15.876 IAA 0000:f6:02.0 8086 0cfe 1 idxd - -
00:01:15.876 + rm -f
/tmp/spdk-ld-path 00:01:15.876 + source autorun-spdk.conf 00:01:15.876 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:15.876 ++ SPDK_TEST_ACCEL_DSA=1 00:01:15.877 ++ SPDK_TEST_ACCEL_IAA=1 00:01:15.877 ++ SPDK_TEST_NVMF=1 00:01:15.877 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:15.877 ++ SPDK_RUN_ASAN=1 00:01:15.877 ++ SPDK_RUN_UBSAN=1 00:01:15.877 ++ RUN_NIGHTLY=0 00:01:15.877 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:15.877 + [[ -n '' ]] 00:01:15.877 + sudo git config --global --add safe.directory /var/jenkins/workspace/dsa-phy-autotest/spdk 00:01:15.877 + for M in /var/spdk/build-*-manifest.txt 00:01:15.877 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:15.877 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/dsa-phy-autotest/output/ 00:01:15.877 + for M in /var/spdk/build-*-manifest.txt 00:01:15.877 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:15.877 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/dsa-phy-autotest/output/ 00:01:15.877 ++ uname 00:01:15.877 + [[ Linux == \L\i\n\u\x ]] 00:01:15.877 + sudo dmesg -T 00:01:16.138 + sudo dmesg --clear 00:01:16.139 + dmesg_pid=886486 00:01:16.139 + [[ Fedora Linux == FreeBSD ]] 00:01:16.139 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:16.139 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:16.139 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:16.139 + [[ -x /usr/src/fio-static/fio ]] 00:01:16.139 + export FIO_BIN=/usr/src/fio-static/fio 00:01:16.139 + FIO_BIN=/usr/src/fio-static/fio 00:01:16.139 + sudo dmesg -Tw 00:01:16.139 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\d\s\a\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:16.139 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:16.139 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:16.139 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:16.139 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:16.139 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:16.139 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:16.139 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:16.139 + spdk/autorun.sh /var/jenkins/workspace/dsa-phy-autotest/autorun-spdk.conf 00:01:16.139 Test configuration: 00:01:16.139 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:16.139 SPDK_TEST_ACCEL_DSA=1 00:01:16.139 SPDK_TEST_ACCEL_IAA=1 00:01:16.139 SPDK_TEST_NVMF=1 00:01:16.139 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:16.139 SPDK_RUN_ASAN=1 00:01:16.139 SPDK_RUN_UBSAN=1 00:01:16.139 RUN_NIGHTLY=0 21:06:30 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:01:16.139 21:06:30 -- scripts/common.sh@502 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:16.139 21:06:30 -- scripts/common.sh@510 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:16.139 21:06:30 -- scripts/common.sh@511 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:16.139 21:06:30 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:16.139 21:06:30 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:16.139 21:06:30 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:16.139 21:06:30 -- paths/export.sh@5 -- $ export PATH 00:01:16.139 21:06:30 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:16.139 21:06:30 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/dsa-phy-autotest/spdk/../output 00:01:16.139 21:06:30 -- common/autobuild_common.sh@435 -- $ date +%s 00:01:16.139 21:06:30 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1713985590.XXXXXX 00:01:16.139 21:06:30 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1713985590.JdL2ef 00:01:16.139 21:06:30 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:01:16.139 21:06:30 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']' 00:01:16.139 21:06:30 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/' 00:01:16.139 21:06:30 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/dsa-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:16.139 21:06:30 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/dsa-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:16.139 21:06:30 -- common/autobuild_common.sh@451 -- $ get_config_params 00:01:16.139 21:06:30 -- common/autotest_common.sh@385 -- $ xtrace_disable 00:01:16.139 21:06:30 -- common/autotest_common.sh@10 -- $ set +x 00:01:16.139 21:06:31 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk' 00:01:16.139 21:06:31 -- common/autobuild_common.sh@453 -- $ start_monitor_resources 00:01:16.139 21:06:31 -- pm/common@17 -- $ local monitor 00:01:16.139 21:06:31 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:16.139 21:06:31 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=886520 00:01:16.139 21:06:31 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:16.139 21:06:31 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=886521 00:01:16.139 21:06:31 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:16.139 21:06:31 -- pm/common@23 -- $ 
MONITOR_RESOURCES_PIDS["$monitor"]=886523 00:01:16.139 21:06:31 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:16.139 21:06:31 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=886525 00:01:16.139 21:06:31 -- pm/common@26 -- $ sleep 1 00:01:16.139 21:06:31 -- pm/common@21 -- $ date +%s 00:01:16.139 21:06:31 -- pm/common@21 -- $ date +%s 00:01:16.139 21:06:31 -- pm/common@21 -- $ date +%s 00:01:16.139 21:06:31 -- pm/common@21 -- $ date +%s 00:01:16.139 21:06:31 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1713985591 00:01:16.139 21:06:31 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1713985591 00:01:16.139 21:06:31 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1713985591 00:01:16.139 21:06:31 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1713985591 00:01:16.139 Redirecting to /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1713985591_collect-vmstat.pm.log 00:01:16.139 Redirecting to /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1713985591_collect-cpu-temp.pm.log 00:01:16.139 Redirecting to /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1713985591_collect-bmc-pm.bmc.pm.log 00:01:16.139 Redirecting to /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1713985591_collect-cpu-load.pm.log 00:01:17.081 21:06:32 -- common/autobuild_common.sh@454 -- $ trap stop_monitor_resources EXIT 00:01:17.081 21:06:32 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:17.081 21:06:32 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:17.081 21:06:32 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/dsa-phy-autotest/spdk 00:01:17.081 21:06:32 -- spdk/autobuild.sh@16 -- $ date -u 00:01:17.081 Wed Apr 24 07:06:32 PM UTC 2024 00:01:17.081 21:06:32 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:17.081 v24.05-pre-414-gea150257d 00:01:17.081 21:06:32 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:01:17.081 21:06:32 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:01:17.081 21:06:32 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:01:17.081 21:06:32 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:01:17.081 21:06:32 -- common/autotest_common.sh@10 -- $ set +x 00:01:17.342 ************************************ 00:01:17.342 START TEST asan 00:01:17.342 ************************************ 00:01:17.342 21:06:32 -- common/autotest_common.sh@1111 -- $ echo 'using asan' 00:01:17.342 using asan 00:01:17.342 00:01:17.342 real 0m0.000s 00:01:17.342 user 0m0.000s 00:01:17.342 sys 0m0.000s 00:01:17.342 21:06:32 -- common/autotest_common.sh@1112 -- $ xtrace_disable 00:01:17.342 21:06:32 -- common/autotest_common.sh@10 -- $ set +x 00:01:17.342 ************************************ 00:01:17.342 END TEST asan 00:01:17.342 ************************************ 00:01:17.342 21:06:32 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:17.342 
21:06:32 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:17.342 21:06:32 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:01:17.342 21:06:32 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:01:17.342 21:06:32 -- common/autotest_common.sh@10 -- $ set +x 00:01:17.342 ************************************ 00:01:17.342 START TEST ubsan 00:01:17.342 ************************************ 00:01:17.342 21:06:32 -- common/autotest_common.sh@1111 -- $ echo 'using ubsan' 00:01:17.342 using ubsan 00:01:17.342 00:01:17.342 real 0m0.000s 00:01:17.342 user 0m0.000s 00:01:17.342 sys 0m0.000s 00:01:17.342 21:06:32 -- common/autotest_common.sh@1112 -- $ xtrace_disable 00:01:17.342 21:06:32 -- common/autotest_common.sh@10 -- $ set +x 00:01:17.342 ************************************ 00:01:17.342 END TEST ubsan 00:01:17.342 ************************************ 00:01:17.603 21:06:32 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:17.603 21:06:32 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:17.603 21:06:32 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:17.603 21:06:32 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:17.603 21:06:32 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:17.603 21:06:32 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:17.603 21:06:32 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:17.603 21:06:32 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:17.603 21:06:32 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/dsa-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-shared 00:01:17.603 Using default SPDK env in /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/env_dpdk 00:01:17.603 Using default DPDK in /var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build 00:01:17.864 Using 'verbs' RDMA provider 00:01:31.040 Configuring ISA-L (logfile: /var/jenkins/workspace/dsa-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:41.039 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/dsa-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:41.039 Creating mk/config.mk...done. 00:01:41.039 Creating mk/cc.flags.mk...done. 00:01:41.039 Type 'make' to build. 00:01:41.039 21:06:55 -- spdk/autobuild.sh@69 -- $ run_test make make -j128 00:01:41.039 21:06:55 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:01:41.039 21:06:55 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:01:41.039 21:06:55 -- common/autotest_common.sh@10 -- $ set +x 00:01:41.039 ************************************ 00:01:41.039 START TEST make 00:01:41.039 ************************************ 00:01:41.039 21:06:55 -- common/autotest_common.sh@1111 -- $ make -j128 00:01:41.039 make[1]: Nothing to be done for 'all'. 
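Condensed, this is the build that the remainder of the log traces; every configure flag below is copied verbatim from the invocation above, and -j128 comes from the run_test make line:

    cd /var/jenkins/workspace/dsa-phy-autotest/spdk
    ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
        --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
        --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-shared
    make -j128   # builds the bundled DPDK first, via its Meson project (output below)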
00:01:47.602 The Meson build system 00:01:47.602 Version: 1.3.1 00:01:47.602 Source dir: /var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk 00:01:47.602 Build dir: /var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build-tmp 00:01:47.602 Build type: native build 00:01:47.602 Program cat found: YES (/usr/bin/cat) 00:01:47.602 Project name: DPDK 00:01:47.602 Project version: 23.11.0 00:01:47.602 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:47.602 C linker for the host machine: cc ld.bfd 2.39-16 00:01:47.602 Host machine cpu family: x86_64 00:01:47.602 Host machine cpu: x86_64 00:01:47.602 Message: ## Building in Developer Mode ## 00:01:47.602 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:47.602 Program check-symbols.sh found: YES (/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:47.602 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:47.602 Program python3 found: YES (/usr/bin/python3) 00:01:47.602 Program cat found: YES (/usr/bin/cat) 00:01:47.602 Compiler for C supports arguments -march=native: YES 00:01:47.602 Checking for size of "void *" : 8 00:01:47.602 Checking for size of "void *" : 8 (cached) 00:01:47.602 Library m found: YES 00:01:47.602 Library numa found: YES 00:01:47.602 Has header "numaif.h" : YES 00:01:47.602 Library fdt found: NO 00:01:47.602 Library execinfo found: NO 00:01:47.602 Has header "execinfo.h" : YES 00:01:47.602 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:47.602 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:47.602 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:47.602 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:47.602 Run-time dependency openssl found: YES 3.0.9 00:01:47.602 Run-time dependency libpcap found: YES 1.10.4 00:01:47.602 Has header "pcap.h" with dependency libpcap: YES 00:01:47.602 Compiler for C supports arguments -Wcast-qual: YES 00:01:47.602 Compiler for C supports arguments -Wdeprecated: YES 00:01:47.602 Compiler for C supports arguments -Wformat: YES 00:01:47.602 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:47.602 Compiler for C supports arguments -Wformat-security: NO 00:01:47.602 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:47.602 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:47.602 Compiler for C supports arguments -Wnested-externs: YES 00:01:47.602 Compiler for C supports arguments -Wold-style-definition: YES 00:01:47.602 Compiler for C supports arguments -Wpointer-arith: YES 00:01:47.602 Compiler for C supports arguments -Wsign-compare: YES 00:01:47.602 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:47.602 Compiler for C supports arguments -Wundef: YES 00:01:47.602 Compiler for C supports arguments -Wwrite-strings: YES 00:01:47.602 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:47.602 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:47.602 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:47.602 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:47.602 Program objdump found: YES (/usr/bin/objdump) 00:01:47.602 Compiler for C supports arguments -mavx512f: YES 00:01:47.602 Checking if "AVX512 checking" compiles: YES 00:01:47.602 Fetching value of define "__SSE4_2__" : 1 00:01:47.602 Fetching value of define "__AES__" : 1 
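Each "Compiler for C supports arguments ...: YES/NO" entry above is Meson probing the toolchain by test-compiling a trivial unit with the candidate flag; a hand-rolled approximation of the idea (Meson's actual probe differs in detail):

    probe_cflag() {
        echo 'int main(void) { return 0; }' > /tmp/probe.c
        if cc "$1" -Werror -c /tmp/probe.c -o /tmp/probe.o 2>/dev/null; then
            echo "Compiler for C supports arguments $1: YES"
        else
            echo "Compiler for C supports arguments $1: NO"
        fi
    }
    probe_cflag -mavx512f            # YES in the log above
    probe_cflag -Wformat-nonliteral  # NO: gcc warns without -Wformat, and -Werror makes that fatal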
00:01:47.602 Fetching value of define "__AVX__" : 1 00:01:47.602 Fetching value of define "__AVX2__" : 1 00:01:47.602 Fetching value of define "__AVX512BW__" : 1 00:01:47.602 Fetching value of define "__AVX512CD__" : 1 00:01:47.602 Fetching value of define "__AVX512DQ__" : 1 00:01:47.602 Fetching value of define "__AVX512F__" : 1 00:01:47.602 Fetching value of define "__AVX512VL__" : 1 00:01:47.602 Fetching value of define "__PCLMUL__" : 1 00:01:47.602 Fetching value of define "__RDRND__" : 1 00:01:47.602 Fetching value of define "__RDSEED__" : 1 00:01:47.602 Fetching value of define "__VPCLMULQDQ__" : 1 00:01:47.602 Fetching value of define "__znver1__" : (undefined) 00:01:47.602 Fetching value of define "__znver2__" : (undefined) 00:01:47.602 Fetching value of define "__znver3__" : (undefined) 00:01:47.602 Fetching value of define "__znver4__" : (undefined) 00:01:47.602 Library asan found: YES 00:01:47.602 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:47.602 Message: lib/log: Defining dependency "log" 00:01:47.602 Message: lib/kvargs: Defining dependency "kvargs" 00:01:47.602 Message: lib/telemetry: Defining dependency "telemetry" 00:01:47.602 Library rt found: YES 00:01:47.602 Checking for function "getentropy" : NO 00:01:47.602 Message: lib/eal: Defining dependency "eal" 00:01:47.602 Message: lib/ring: Defining dependency "ring" 00:01:47.602 Message: lib/rcu: Defining dependency "rcu" 00:01:47.602 Message: lib/mempool: Defining dependency "mempool" 00:01:47.602 Message: lib/mbuf: Defining dependency "mbuf" 00:01:47.602 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:47.602 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:47.602 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:47.602 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:47.602 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:47.602 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:01:47.602 Compiler for C supports arguments -mpclmul: YES 00:01:47.602 Compiler for C supports arguments -maes: YES 00:01:47.602 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:47.602 Compiler for C supports arguments -mavx512bw: YES 00:01:47.602 Compiler for C supports arguments -mavx512dq: YES 00:01:47.602 Compiler for C supports arguments -mavx512vl: YES 00:01:47.602 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:47.602 Compiler for C supports arguments -mavx2: YES 00:01:47.602 Compiler for C supports arguments -mavx: YES 00:01:47.603 Message: lib/net: Defining dependency "net" 00:01:47.603 Message: lib/meter: Defining dependency "meter" 00:01:47.603 Message: lib/ethdev: Defining dependency "ethdev" 00:01:47.603 Message: lib/pci: Defining dependency "pci" 00:01:47.603 Message: lib/cmdline: Defining dependency "cmdline" 00:01:47.603 Message: lib/hash: Defining dependency "hash" 00:01:47.603 Message: lib/timer: Defining dependency "timer" 00:01:47.603 Message: lib/compressdev: Defining dependency "compressdev" 00:01:47.603 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:47.603 Message: lib/dmadev: Defining dependency "dmadev" 00:01:47.603 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:47.603 Message: lib/power: Defining dependency "power" 00:01:47.603 Message: lib/reorder: Defining dependency "reorder" 00:01:47.603 Message: lib/security: Defining dependency "security" 00:01:47.603 Has header "linux/userfaultfd.h" : YES 00:01:47.603 Has header "linux/vduse.h" : YES 00:01:47.603 Message: lib/vhost: Defining dependency 
"vhost" 00:01:47.603 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:47.603 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:47.603 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:47.603 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:47.603 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:47.603 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:47.603 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:47.603 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:47.603 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:47.603 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:47.603 Program doxygen found: YES (/usr/bin/doxygen) 00:01:47.603 Configuring doxy-api-html.conf using configuration 00:01:47.603 Configuring doxy-api-man.conf using configuration 00:01:47.603 Program mandb found: YES (/usr/bin/mandb) 00:01:47.603 Program sphinx-build found: NO 00:01:47.603 Configuring rte_build_config.h using configuration 00:01:47.603 Message: 00:01:47.603 ================= 00:01:47.603 Applications Enabled 00:01:47.603 ================= 00:01:47.603 00:01:47.603 apps: 00:01:47.603 00:01:47.603 00:01:47.603 Message: 00:01:47.603 ================= 00:01:47.603 Libraries Enabled 00:01:47.603 ================= 00:01:47.603 00:01:47.603 libs: 00:01:47.603 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:47.603 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:47.603 cryptodev, dmadev, power, reorder, security, vhost, 00:01:47.603 00:01:47.603 Message: 00:01:47.603 =============== 00:01:47.603 Drivers Enabled 00:01:47.603 =============== 00:01:47.603 00:01:47.603 common: 00:01:47.603 00:01:47.603 bus: 00:01:47.603 pci, vdev, 00:01:47.603 mempool: 00:01:47.603 ring, 00:01:47.603 dma: 00:01:47.603 00:01:47.603 net: 00:01:47.603 00:01:47.603 crypto: 00:01:47.603 00:01:47.603 compress: 00:01:47.603 00:01:47.603 vdpa: 00:01:47.603 00:01:47.603 00:01:47.603 Message: 00:01:47.603 ================= 00:01:47.603 Content Skipped 00:01:47.603 ================= 00:01:47.603 00:01:47.603 apps: 00:01:47.603 dumpcap: explicitly disabled via build config 00:01:47.603 graph: explicitly disabled via build config 00:01:47.603 pdump: explicitly disabled via build config 00:01:47.603 proc-info: explicitly disabled via build config 00:01:47.603 test-acl: explicitly disabled via build config 00:01:47.603 test-bbdev: explicitly disabled via build config 00:01:47.603 test-cmdline: explicitly disabled via build config 00:01:47.603 test-compress-perf: explicitly disabled via build config 00:01:47.603 test-crypto-perf: explicitly disabled via build config 00:01:47.603 test-dma-perf: explicitly disabled via build config 00:01:47.603 test-eventdev: explicitly disabled via build config 00:01:47.603 test-fib: explicitly disabled via build config 00:01:47.603 test-flow-perf: explicitly disabled via build config 00:01:47.603 test-gpudev: explicitly disabled via build config 00:01:47.603 test-mldev: explicitly disabled via build config 00:01:47.603 test-pipeline: explicitly disabled via build config 00:01:47.603 test-pmd: explicitly disabled via build config 00:01:47.603 test-regex: explicitly disabled via build config 00:01:47.603 test-sad: explicitly disabled via build config 00:01:47.603 test-security-perf: explicitly disabled via build config 00:01:47.603 
00:01:47.603 libs: 00:01:47.603 metrics: explicitly disabled via build config 00:01:47.603 acl: explicitly disabled via build config 00:01:47.603 bbdev: explicitly disabled via build config 00:01:47.603 bitratestats: explicitly disabled via build config 00:01:47.603 bpf: explicitly disabled via build config 00:01:47.603 cfgfile: explicitly disabled via build config 00:01:47.603 distributor: explicitly disabled via build config 00:01:47.603 efd: explicitly disabled via build config 00:01:47.603 eventdev: explicitly disabled via build config 00:01:47.603 dispatcher: explicitly disabled via build config 00:01:47.603 gpudev: explicitly disabled via build config 00:01:47.603 gro: explicitly disabled via build config 00:01:47.603 gso: explicitly disabled via build config 00:01:47.603 ip_frag: explicitly disabled via build config 00:01:47.603 jobstats: explicitly disabled via build config 00:01:47.603 latencystats: explicitly disabled via build config 00:01:47.603 lpm: explicitly disabled via build config 00:01:47.603 member: explicitly disabled via build config 00:01:47.603 pcapng: explicitly disabled via build config 00:01:47.603 rawdev: explicitly disabled via build config 00:01:47.603 regexdev: explicitly disabled via build config 00:01:47.603 mldev: explicitly disabled via build config 00:01:47.603 rib: explicitly disabled via build config 00:01:47.603 sched: explicitly disabled via build config 00:01:47.603 stack: explicitly disabled via build config 00:01:47.603 ipsec: explicitly disabled via build config 00:01:47.603 pdcp: explicitly disabled via build config 00:01:47.603 fib: explicitly disabled via build config 00:01:47.603 port: explicitly disabled via build config 00:01:47.603 pdump: explicitly disabled via build config 00:01:47.603 table: explicitly disabled via build config 00:01:47.603 pipeline: explicitly disabled via build config 00:01:47.603 graph: explicitly disabled via build config 00:01:47.603 node: explicitly disabled via build config 00:01:47.603 00:01:47.603 drivers: 00:01:47.603 common/cpt: not in enabled drivers build config 00:01:47.603 common/dpaax: not in enabled drivers build config 00:01:47.603 common/iavf: not in enabled drivers build config 00:01:47.603 common/idpf: not in enabled drivers build config 00:01:47.603 common/mvep: not in enabled drivers build config 00:01:47.603 common/octeontx: not in enabled drivers build config 00:01:47.603 bus/auxiliary: not in enabled drivers build config 00:01:47.603 bus/cdx: not in enabled drivers build config 00:01:47.603 bus/dpaa: not in enabled drivers build config 00:01:47.603 bus/fslmc: not in enabled drivers build config 00:01:47.603 bus/ifpga: not in enabled drivers build config 00:01:47.603 bus/platform: not in enabled drivers build config 00:01:47.603 bus/vmbus: not in enabled drivers build config 00:01:47.603 common/cnxk: not in enabled drivers build config 00:01:47.603 common/mlx5: not in enabled drivers build config 00:01:47.603 common/nfp: not in enabled drivers build config 00:01:47.603 common/qat: not in enabled drivers build config 00:01:47.603 common/sfc_efx: not in enabled drivers build config 00:01:47.603 mempool/bucket: not in enabled drivers build config 00:01:47.603 mempool/cnxk: not in enabled drivers build config 00:01:47.603 mempool/dpaa: not in enabled drivers build config 00:01:47.603 mempool/dpaa2: not in enabled drivers build config 00:01:47.603 mempool/octeontx: not in enabled drivers build config 00:01:47.603 mempool/stack: not in enabled drivers build config 00:01:47.603 dma/cnxk: not in enabled 
drivers build config 00:01:47.603 dma/dpaa: not in enabled drivers build config 00:01:47.603 dma/dpaa2: not in enabled drivers build config 00:01:47.603 dma/hisilicon: not in enabled drivers build config 00:01:47.603 dma/idxd: not in enabled drivers build config 00:01:47.603 dma/ioat: not in enabled drivers build config 00:01:47.603 dma/skeleton: not in enabled drivers build config 00:01:47.603 net/af_packet: not in enabled drivers build config 00:01:47.603 net/af_xdp: not in enabled drivers build config 00:01:47.603 net/ark: not in enabled drivers build config 00:01:47.603 net/atlantic: not in enabled drivers build config 00:01:47.603 net/avp: not in enabled drivers build config 00:01:47.603 net/axgbe: not in enabled drivers build config 00:01:47.603 net/bnx2x: not in enabled drivers build config 00:01:47.603 net/bnxt: not in enabled drivers build config 00:01:47.603 net/bonding: not in enabled drivers build config 00:01:47.603 net/cnxk: not in enabled drivers build config 00:01:47.603 net/cpfl: not in enabled drivers build config 00:01:47.603 net/cxgbe: not in enabled drivers build config 00:01:47.603 net/dpaa: not in enabled drivers build config 00:01:47.603 net/dpaa2: not in enabled drivers build config 00:01:47.603 net/e1000: not in enabled drivers build config 00:01:47.603 net/ena: not in enabled drivers build config 00:01:47.603 net/enetc: not in enabled drivers build config 00:01:47.603 net/enetfec: not in enabled drivers build config 00:01:47.603 net/enic: not in enabled drivers build config 00:01:47.603 net/failsafe: not in enabled drivers build config 00:01:47.603 net/fm10k: not in enabled drivers build config 00:01:47.603 net/gve: not in enabled drivers build config 00:01:47.603 net/hinic: not in enabled drivers build config 00:01:47.603 net/hns3: not in enabled drivers build config 00:01:47.603 net/i40e: not in enabled drivers build config 00:01:47.604 net/iavf: not in enabled drivers build config 00:01:47.604 net/ice: not in enabled drivers build config 00:01:47.604 net/idpf: not in enabled drivers build config 00:01:47.604 net/igc: not in enabled drivers build config 00:01:47.604 net/ionic: not in enabled drivers build config 00:01:47.604 net/ipn3ke: not in enabled drivers build config 00:01:47.604 net/ixgbe: not in enabled drivers build config 00:01:47.604 net/mana: not in enabled drivers build config 00:01:47.604 net/memif: not in enabled drivers build config 00:01:47.604 net/mlx4: not in enabled drivers build config 00:01:47.604 net/mlx5: not in enabled drivers build config 00:01:47.604 net/mvneta: not in enabled drivers build config 00:01:47.604 net/mvpp2: not in enabled drivers build config 00:01:47.604 net/netvsc: not in enabled drivers build config 00:01:47.604 net/nfb: not in enabled drivers build config 00:01:47.604 net/nfp: not in enabled drivers build config 00:01:47.604 net/ngbe: not in enabled drivers build config 00:01:47.604 net/null: not in enabled drivers build config 00:01:47.604 net/octeontx: not in enabled drivers build config 00:01:47.604 net/octeon_ep: not in enabled drivers build config 00:01:47.604 net/pcap: not in enabled drivers build config 00:01:47.604 net/pfe: not in enabled drivers build config 00:01:47.604 net/qede: not in enabled drivers build config 00:01:47.604 net/ring: not in enabled drivers build config 00:01:47.604 net/sfc: not in enabled drivers build config 00:01:47.604 net/softnic: not in enabled drivers build config 00:01:47.604 net/tap: not in enabled drivers build config 00:01:47.604 net/thunderx: not in enabled drivers build 
config 00:01:47.604 net/txgbe: not in enabled drivers build config 00:01:47.604 net/vdev_netvsc: not in enabled drivers build config 00:01:47.604 net/vhost: not in enabled drivers build config 00:01:47.604 net/virtio: not in enabled drivers build config 00:01:47.604 net/vmxnet3: not in enabled drivers build config 00:01:47.604 raw/*: missing internal dependency, "rawdev" 00:01:47.604 crypto/armv8: not in enabled drivers build config 00:01:47.604 crypto/bcmfs: not in enabled drivers build config 00:01:47.604 crypto/caam_jr: not in enabled drivers build config 00:01:47.604 crypto/ccp: not in enabled drivers build config 00:01:47.604 crypto/cnxk: not in enabled drivers build config 00:01:47.604 crypto/dpaa_sec: not in enabled drivers build config 00:01:47.604 crypto/dpaa2_sec: not in enabled drivers build config 00:01:47.604 crypto/ipsec_mb: not in enabled drivers build config 00:01:47.604 crypto/mlx5: not in enabled drivers build config 00:01:47.604 crypto/mvsam: not in enabled drivers build config 00:01:47.604 crypto/nitrox: not in enabled drivers build config 00:01:47.604 crypto/null: not in enabled drivers build config 00:01:47.604 crypto/octeontx: not in enabled drivers build config 00:01:47.604 crypto/openssl: not in enabled drivers build config 00:01:47.604 crypto/scheduler: not in enabled drivers build config 00:01:47.604 crypto/uadk: not in enabled drivers build config 00:01:47.604 crypto/virtio: not in enabled drivers build config 00:01:47.604 compress/isal: not in enabled drivers build config 00:01:47.604 compress/mlx5: not in enabled drivers build config 00:01:47.604 compress/octeontx: not in enabled drivers build config 00:01:47.604 compress/zlib: not in enabled drivers build config 00:01:47.604 regex/*: missing internal dependency, "regexdev" 00:01:47.604 ml/*: missing internal dependency, "mldev" 00:01:47.604 vdpa/ifc: not in enabled drivers build config 00:01:47.604 vdpa/mlx5: not in enabled drivers build config 00:01:47.604 vdpa/nfp: not in enabled drivers build config 00:01:47.604 vdpa/sfc: not in enabled drivers build config 00:01:47.604 event/*: missing internal dependency, "eventdev" 00:01:47.604 baseband/*: missing internal dependency, "bbdev" 00:01:47.604 gpu/*: missing internal dependency, "gpudev" 00:01:47.604 00:01:47.604 00:01:47.604 Build targets in project: 84 00:01:47.604 00:01:47.604 DPDK 23.11.0 00:01:47.604 00:01:47.604 User defined options 00:01:47.604 buildtype : debug 00:01:47.604 default_library : shared 00:01:47.604 libdir : lib 00:01:47.604 prefix : /var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build 00:01:47.604 b_sanitize : address 00:01:47.604 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:47.604 c_link_args : 00:01:47.604 cpu_instruction_set: native 00:01:47.604 disable_apps : test-acl,test-bbdev,test-crypto-perf,test-fib,test-pipeline,test-gpudev,test-flow-perf,pdump,dumpcap,test-sad,test-cmdline,test-eventdev,proc-info,test,test-dma-perf,test-pmd,test-mldev,test-compress-perf,test-security-perf,graph,test-regex 00:01:47.604 disable_libs : pipeline,member,eventdev,efd,bbdev,cfgfile,rib,sched,mldev,metrics,lpm,latencystats,pdump,pdcp,bpf,ipsec,fib,ip_frag,table,port,stack,gro,jobstats,regexdev,rawdev,pcapng,dispatcher,node,bitratestats,acl,gpudev,distributor,graph,gso 00:01:47.604 enable_docs : false 00:01:47.604 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:47.604 enable_kmods : false 00:01:47.604 tests : false 00:01:47.604 00:01:47.604 Found ninja-1.11.1.git.kitware.jobserver-1 
at /usr/local/bin/ninja 00:01:47.604 ninja: Entering directory `/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build-tmp' 00:01:47.604 [1/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:47.604 [2/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:47.604 [3/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:47.604 [4/264] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:47.604 [5/264] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:47.604 [6/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:47.604 [7/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:47.604 [8/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:47.604 [9/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:47.604 [10/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:47.604 [11/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:47.604 [12/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:47.604 [13/264] Linking static target lib/librte_kvargs.a 00:01:47.604 [14/264] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:47.604 [15/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:47.604 [16/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:47.604 [17/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:47.604 [18/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:47.604 [19/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:47.604 [20/264] Linking static target lib/librte_log.a 00:01:47.604 [21/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:47.604 [22/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:47.604 [23/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:47.604 [24/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:47.604 [25/264] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:47.604 [26/264] Linking static target lib/librte_pci.a 00:01:47.604 [27/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:47.604 [28/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:47.604 [29/264] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:47.604 [30/264] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:47.604 [31/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:47.604 [32/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:47.604 [33/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:47.604 [34/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:47.604 [35/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:47.604 [36/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:47.604 [37/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:47.604 [38/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:47.604 [39/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:47.604 [40/264] Compiling C object 
lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:47.604 [41/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:47.604 [42/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:47.604 [43/264] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:47.604 [44/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:47.604 [45/264] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:47.604 [46/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:47.604 [47/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:47.604 [48/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:47.604 [49/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:47.604 [50/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:47.604 [51/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:47.604 [52/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:47.863 [53/264] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:47.863 [54/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:47.863 [55/264] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:47.863 [56/264] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:47.863 [57/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:47.863 [58/264] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:47.863 [59/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:47.864 [60/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:47.864 [61/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:47.864 [62/264] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:47.864 [63/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:47.864 [64/264] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:47.864 [65/264] Linking static target lib/librte_ring.a 00:01:47.864 [66/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:47.864 [67/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:47.864 [68/264] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:01:47.864 [69/264] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.864 [70/264] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.864 [71/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:47.864 [72/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:47.864 [73/264] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:47.864 [74/264] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:47.864 [75/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:47.864 [76/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:47.864 [77/264] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:47.864 [78/264] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:47.864 [79/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 
00:01:47.864 [80/264] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:47.864 [81/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:47.864 [82/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:47.864 [83/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:47.864 [84/264] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:47.864 [85/264] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:47.864 [86/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:47.864 [87/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:47.864 [88/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:47.864 [89/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:47.864 [90/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:47.864 [91/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:47.864 [92/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:47.864 [93/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:47.864 [94/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:47.864 [95/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:47.864 [96/264] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:47.864 [97/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:47.864 [98/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:47.864 [99/264] Linking static target lib/librte_meter.a 00:01:47.864 [100/264] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:47.864 [101/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:47.864 [102/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:47.864 [103/264] Linking static target lib/librte_dmadev.a 00:01:47.864 [104/264] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:47.864 [105/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:47.864 [106/264] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:47.864 [107/264] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.864 [108/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:47.864 [109/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:47.864 [110/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:47.864 [111/264] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:47.864 [112/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:47.864 [113/264] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:47.864 [114/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:47.864 [115/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:47.864 [116/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:48.123 [117/264] Linking static target lib/librte_cmdline.a 00:01:48.123 [118/264] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:48.123 [119/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:48.123 [120/264] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:48.123 [121/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:48.123 [122/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:48.123 [123/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:48.123 [124/264] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:48.123 [125/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:48.123 [126/264] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:48.123 [127/264] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.123 [128/264] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:48.123 [129/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:48.123 [130/264] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:48.123 [131/264] Linking static target lib/librte_reorder.a 00:01:48.123 [132/264] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:48.123 [133/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:48.123 [134/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:48.123 [135/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:48.123 [136/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:48.123 [137/264] Linking static target lib/librte_mempool.a 00:01:48.123 [138/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:48.123 [139/264] Linking target lib/librte_log.so.24.0 00:01:48.123 [140/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:48.123 [141/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:48.123 [142/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:48.123 [143/264] Linking static target lib/librte_telemetry.a 00:01:48.123 [144/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:48.123 [145/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:48.123 [146/264] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:48.123 [147/264] Linking static target lib/librte_timer.a 00:01:48.123 [148/264] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:48.123 [149/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:48.123 [150/264] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:48.123 [151/264] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.123 [152/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:48.123 [153/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:48.123 [154/264] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:48.123 [155/264] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:48.123 [156/264] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:48.123 [157/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:48.123 [158/264] Linking static target lib/librte_power.a 00:01:48.123 [159/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:48.123 [160/264] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 
00:01:48.123 [161/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:48.123 [162/264] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:48.123 [163/264] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:48.123 [164/264] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:48.123 [165/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:48.123 [166/264] Linking target lib/librte_kvargs.so.24.0 00:01:48.123 [167/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:48.123 [168/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:48.123 [169/264] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:48.123 [170/264] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.123 [171/264] Linking static target lib/librte_net.a 00:01:48.123 [172/264] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:48.123 [173/264] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:48.123 [174/264] Linking static target lib/librte_rcu.a 00:01:48.123 [175/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:48.381 [176/264] Linking static target lib/librte_compressdev.a 00:01:48.381 [177/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:48.381 [178/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:48.381 [179/264] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:48.381 [180/264] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:48.381 [181/264] Linking static target drivers/librte_bus_vdev.a 00:01:48.381 [182/264] Linking static target lib/librte_eal.a 00:01:48.381 [183/264] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.381 [184/264] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:01:48.381 [185/264] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:48.381 [186/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:48.381 [187/264] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:48.381 [188/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:48.381 [189/264] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:48.381 [190/264] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:48.381 [191/264] Linking static target drivers/librte_mempool_ring.a 00:01:48.381 [192/264] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:48.381 [193/264] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:48.381 [194/264] Linking static target drivers/librte_bus_pci.a 00:01:48.381 [195/264] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.381 [196/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:48.381 [197/264] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:48.381 [198/264] Linking static target lib/librte_security.a 00:01:48.381 [199/264] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.382 [200/264] Generating lib/net.sym_chk 
with a custom command (wrapped by meson to capture output) 00:01:48.382 [201/264] Linking target lib/librte_telemetry.so.24.0 00:01:48.382 [202/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:48.382 [203/264] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.382 [204/264] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.382 [205/264] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:48.382 [206/264] Linking static target lib/librte_hash.a 00:01:48.382 [207/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:48.382 [208/264] Linking static target lib/librte_mbuf.a 00:01:48.640 [209/264] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:01:48.640 [210/264] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.640 [211/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:48.640 [212/264] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.640 [213/264] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.640 [214/264] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.640 [215/264] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.640 [216/264] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.898 [217/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:48.898 [218/264] Linking static target lib/librte_cryptodev.a 00:01:48.898 [219/264] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.898 [220/264] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.465 [221/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:49.465 [222/264] Linking static target lib/librte_ethdev.a 00:01:49.723 [223/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:49.981 [224/264] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.880 [225/264] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:51.880 [226/264] Linking static target lib/librte_vhost.a 00:01:53.255 [227/264] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.693 [228/264] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.693 [229/264] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.693 [230/264] Linking target lib/librte_eal.so.24.0 00:01:54.693 [231/264] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:01:54.693 [232/264] Linking target lib/librte_pci.so.24.0 00:01:54.693 [233/264] Linking target lib/librte_dmadev.so.24.0 00:01:54.693 [234/264] Linking target lib/librte_timer.so.24.0 00:01:54.693 [235/264] Linking target drivers/librte_bus_vdev.so.24.0 00:01:54.693 [236/264] Linking target lib/librte_ring.so.24.0 00:01:54.693 [237/264] Linking target lib/librte_meter.so.24.0 00:01:54.693 [238/264] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:01:54.693 [239/264] Generating symbol file 
lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:01:54.693 [240/264] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:01:54.693 [241/264] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:01:54.693 [242/264] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:01:54.693 [243/264] Linking target lib/librte_rcu.so.24.0 00:01:54.693 [244/264] Linking target drivers/librte_bus_pci.so.24.0 00:01:54.693 [245/264] Linking target lib/librte_mempool.so.24.0 00:01:54.951 [246/264] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:01:54.951 [247/264] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:01:54.951 [248/264] Linking target drivers/librte_mempool_ring.so.24.0 00:01:54.951 [249/264] Linking target lib/librte_mbuf.so.24.0 00:01:54.951 [250/264] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:01:54.951 [251/264] Linking target lib/librte_cryptodev.so.24.0 00:01:54.951 [252/264] Linking target lib/librte_net.so.24.0 00:01:54.951 [253/264] Linking target lib/librte_compressdev.so.24.0 00:01:54.951 [254/264] Linking target lib/librte_reorder.so.24.0 00:01:55.209 [255/264] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:01:55.209 [256/264] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:01:55.209 [257/264] Linking target lib/librte_security.so.24.0 00:01:55.209 [258/264] Linking target lib/librte_cmdline.so.24.0 00:01:55.209 [259/264] Linking target lib/librte_hash.so.24.0 00:01:55.209 [260/264] Linking target lib/librte_ethdev.so.24.0 00:01:55.209 [261/264] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:01:55.209 [262/264] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:01:55.474 [263/264] Linking target lib/librte_power.so.24.0 00:01:55.474 [264/264] Linking target lib/librte_vhost.so.24.0 00:01:55.474 INFO: autodetecting backend as ninja 00:01:55.474 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build-tmp -j 128 00:01:56.039 CC lib/log/log.o 00:01:56.039 CC lib/log/log_flags.o 00:01:56.039 CC lib/log/log_deprecated.o 00:01:56.039 CC lib/ut/ut.o 00:01:56.039 CC lib/ut_mock/mock.o 00:01:56.298 LIB libspdk_log.a 00:01:56.298 LIB libspdk_ut.a 00:01:56.298 SO libspdk_log.so.7.0 00:01:56.298 LIB libspdk_ut_mock.a 00:01:56.298 SO libspdk_ut.so.2.0 00:01:56.298 SO libspdk_ut_mock.so.6.0 00:01:56.298 SYMLINK libspdk_log.so 00:01:56.298 SYMLINK libspdk_ut.so 00:01:56.298 SYMLINK libspdk_ut_mock.so 00:01:56.556 CC lib/dma/dma.o 00:01:56.556 CC lib/util/base64.o 00:01:56.556 CC lib/util/bit_array.o 00:01:56.556 CC lib/util/crc32.o 00:01:56.556 CC lib/util/cpuset.o 00:01:56.556 CC lib/util/crc32_ieee.o 00:01:56.556 CC lib/util/crc16.o 00:01:56.556 CC lib/util/crc32c.o 00:01:56.556 CC lib/util/fd.o 00:01:56.556 CC lib/util/file.o 00:01:56.556 CC lib/util/crc64.o 00:01:56.556 CC lib/util/iov.o 00:01:56.556 CC lib/util/dif.o 00:01:56.556 CC lib/ioat/ioat.o 00:01:56.556 CC lib/util/hexlify.o 00:01:56.556 CC lib/util/math.o 00:01:56.556 CC lib/util/pipe.o 00:01:56.556 CC lib/util/strerror_tls.o 00:01:56.556 CC lib/util/uuid.o 00:01:56.556 CXX lib/trace_parser/trace.o 00:01:56.556 CC lib/util/string.o 00:01:56.556 CC lib/util/fd_group.o 00:01:56.556 CC lib/util/xor.o 00:01:56.556 CC lib/util/zipf.o 
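From here the output switches from DPDK's ninja run to SPDK's own make banners: CC compiles an object, LIB archives a component into a static libspdk_*.a, SO links the versioned shared object, and SYMLINK drops the unversioned name beside it, as the libspdk_log lines above show. A hedged sketch of that archive/shared-object/symlink pattern with generic toolchain commands, reusing the object names from the log; the exact flags SPDK's makefiles pass are not shown in this run.

# LIB -> SO -> SYMLINK, spelled out by hand (file names mirror the log, flags illustrative).
ar rcs libspdk_log.a log.o log_flags.o log_deprecated.o
cc -shared -o libspdk_log.so.7.0 -Wl,-soname,libspdk_log.so.7.0 \
   -Wl,--whole-archive libspdk_log.a -Wl,--no-whole-archive
ln -sf libspdk_log.so.7.0 libspdk_log.so
objdump -p libspdk_log.so.7.0 | grep SONAME   # the name dependents record at link time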
00:01:56.556 CC lib/vfio_user/host/vfio_user.o 00:01:56.556 CC lib/vfio_user/host/vfio_user_pci.o 00:01:56.556 LIB libspdk_dma.a 00:01:56.556 SO libspdk_dma.so.4.0 00:01:56.556 SYMLINK libspdk_dma.so 00:01:56.816 LIB libspdk_ioat.a 00:01:56.816 SO libspdk_ioat.so.7.0 00:01:56.816 LIB libspdk_vfio_user.a 00:01:56.816 SYMLINK libspdk_ioat.so 00:01:56.816 SO libspdk_vfio_user.so.5.0 00:01:56.816 SYMLINK libspdk_vfio_user.so 00:01:57.076 LIB libspdk_util.a 00:01:57.076 LIB libspdk_trace_parser.a 00:01:57.076 SO libspdk_trace_parser.so.5.0 00:01:57.076 SO libspdk_util.so.9.0 00:01:57.335 SYMLINK libspdk_trace_parser.so 00:01:57.335 SYMLINK libspdk_util.so 00:01:57.593 CC lib/vmd/vmd.o 00:01:57.593 CC lib/vmd/led.o 00:01:57.593 CC lib/env_dpdk/env.o 00:01:57.593 CC lib/env_dpdk/init.o 00:01:57.593 CC lib/env_dpdk/memory.o 00:01:57.593 CC lib/env_dpdk/pci.o 00:01:57.593 CC lib/env_dpdk/threads.o 00:01:57.593 CC lib/env_dpdk/pci_ioat.o 00:01:57.593 CC lib/env_dpdk/pci_virtio.o 00:01:57.593 CC lib/json/json_parse.o 00:01:57.593 CC lib/env_dpdk/pci_vmd.o 00:01:57.593 CC lib/env_dpdk/pci_idxd.o 00:01:57.593 CC lib/json/json_util.o 00:01:57.593 CC lib/env_dpdk/pci_event.o 00:01:57.593 CC lib/env_dpdk/sigbus_handler.o 00:01:57.593 CC lib/env_dpdk/pci_dpdk.o 00:01:57.593 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:57.593 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:57.593 CC lib/json/json_write.o 00:01:57.593 CC lib/rdma/rdma_verbs.o 00:01:57.593 CC lib/rdma/common.o 00:01:57.593 CC lib/idxd/idxd_user.o 00:01:57.593 CC lib/idxd/idxd.o 00:01:57.593 CC lib/conf/conf.o 00:01:57.851 LIB libspdk_rdma.a 00:01:57.851 SO libspdk_rdma.so.6.0 00:01:57.851 LIB libspdk_conf.a 00:01:57.851 LIB libspdk_json.a 00:01:57.851 SO libspdk_conf.so.6.0 00:01:57.851 SYMLINK libspdk_rdma.so 00:01:57.851 SO libspdk_json.so.6.0 00:01:57.851 SYMLINK libspdk_conf.so 00:01:57.851 SYMLINK libspdk_json.so 00:01:58.108 LIB libspdk_vmd.a 00:01:58.108 SO libspdk_vmd.so.6.0 00:01:58.108 CC lib/jsonrpc/jsonrpc_server.o 00:01:58.108 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:58.108 CC lib/jsonrpc/jsonrpc_client.o 00:01:58.108 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:58.108 SYMLINK libspdk_vmd.so 00:01:58.108 LIB libspdk_idxd.a 00:01:58.108 SO libspdk_idxd.so.12.0 00:01:58.108 SYMLINK libspdk_idxd.so 00:01:58.366 LIB libspdk_jsonrpc.a 00:01:58.366 SO libspdk_jsonrpc.so.6.0 00:01:58.366 SYMLINK libspdk_jsonrpc.so 00:01:58.625 CC lib/rpc/rpc.o 00:01:58.882 LIB libspdk_rpc.a 00:01:58.882 SO libspdk_rpc.so.6.0 00:01:58.882 SYMLINK libspdk_rpc.so 00:01:59.140 LIB libspdk_env_dpdk.a 00:01:59.140 SO libspdk_env_dpdk.so.14.0 00:01:59.140 CC lib/notify/notify.o 00:01:59.141 CC lib/notify/notify_rpc.o 00:01:59.141 CC lib/trace/trace.o 00:01:59.141 CC lib/trace/trace_flags.o 00:01:59.141 CC lib/trace/trace_rpc.o 00:01:59.141 CC lib/keyring/keyring.o 00:01:59.141 CC lib/keyring/keyring_rpc.o 00:01:59.141 SYMLINK libspdk_env_dpdk.so 00:01:59.399 LIB libspdk_notify.a 00:01:59.399 LIB libspdk_trace.a 00:01:59.399 SO libspdk_notify.so.6.0 00:01:59.399 SO libspdk_trace.so.10.0 00:01:59.399 SYMLINK libspdk_trace.so 00:01:59.399 SYMLINK libspdk_notify.so 00:01:59.399 LIB libspdk_keyring.a 00:01:59.399 SO libspdk_keyring.so.1.0 00:01:59.399 SYMLINK libspdk_keyring.so 00:01:59.656 CC lib/sock/sock_rpc.o 00:01:59.656 CC lib/sock/sock.o 00:01:59.656 CC lib/thread/thread.o 00:01:59.656 CC lib/thread/iobuf.o 00:02:00.223 LIB libspdk_sock.a 00:02:00.223 SO libspdk_sock.so.9.0 00:02:00.223 SYMLINK libspdk_sock.so 00:02:00.223 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:00.223 CC 
lib/nvme/nvme_ctrlr.o 00:02:00.223 CC lib/nvme/nvme_ns.o 00:02:00.223 CC lib/nvme/nvme_fabric.o 00:02:00.223 CC lib/nvme/nvme_ns_cmd.o 00:02:00.223 CC lib/nvme/nvme_pcie_common.o 00:02:00.223 CC lib/nvme/nvme_pcie.o 00:02:00.223 CC lib/nvme/nvme_qpair.o 00:02:00.223 CC lib/nvme/nvme.o 00:02:00.223 CC lib/nvme/nvme_quirks.o 00:02:00.223 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:00.223 CC lib/nvme/nvme_transport.o 00:02:00.223 CC lib/nvme/nvme_discovery.o 00:02:00.223 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:00.223 CC lib/nvme/nvme_tcp.o 00:02:00.223 CC lib/nvme/nvme_opal.o 00:02:00.223 CC lib/nvme/nvme_io_msg.o 00:02:00.223 CC lib/nvme/nvme_zns.o 00:02:00.223 CC lib/nvme/nvme_poll_group.o 00:02:00.223 CC lib/nvme/nvme_stubs.o 00:02:00.223 CC lib/nvme/nvme_auth.o 00:02:00.223 CC lib/nvme/nvme_rdma.o 00:02:00.223 CC lib/nvme/nvme_cuse.o 00:02:00.789 LIB libspdk_thread.a 00:02:00.789 SO libspdk_thread.so.10.0 00:02:00.789 SYMLINK libspdk_thread.so 00:02:01.046 CC lib/blob/blobstore.o 00:02:01.046 CC lib/blob/zeroes.o 00:02:01.046 CC lib/blob/request.o 00:02:01.046 CC lib/blob/blob_bs_dev.o 00:02:01.046 CC lib/accel/accel.o 00:02:01.046 CC lib/accel/accel_rpc.o 00:02:01.046 CC lib/accel/accel_sw.o 00:02:01.046 CC lib/init/subsystem.o 00:02:01.046 CC lib/init/subsystem_rpc.o 00:02:01.046 CC lib/init/rpc.o 00:02:01.046 CC lib/init/json_config.o 00:02:01.046 CC lib/virtio/virtio.o 00:02:01.046 CC lib/virtio/virtio_vhost_user.o 00:02:01.046 CC lib/virtio/virtio_pci.o 00:02:01.046 CC lib/virtio/virtio_vfio_user.o 00:02:01.305 LIB libspdk_init.a 00:02:01.305 SO libspdk_init.so.5.0 00:02:01.305 SYMLINK libspdk_init.so 00:02:01.305 LIB libspdk_virtio.a 00:02:01.563 SO libspdk_virtio.so.7.0 00:02:01.563 SYMLINK libspdk_virtio.so 00:02:01.563 CC lib/event/reactor.o 00:02:01.563 CC lib/event/app.o 00:02:01.563 CC lib/event/log_rpc.o 00:02:01.563 CC lib/event/app_rpc.o 00:02:01.563 CC lib/event/scheduler_static.o 00:02:01.821 LIB libspdk_accel.a 00:02:01.821 SO libspdk_accel.so.15.0 00:02:01.821 LIB libspdk_nvme.a 00:02:01.821 SYMLINK libspdk_accel.so 00:02:02.079 SO libspdk_nvme.so.13.0 00:02:02.079 LIB libspdk_event.a 00:02:02.079 SO libspdk_event.so.13.0 00:02:02.079 CC lib/bdev/bdev.o 00:02:02.079 CC lib/bdev/bdev_rpc.o 00:02:02.079 CC lib/bdev/bdev_zone.o 00:02:02.079 CC lib/bdev/part.o 00:02:02.079 CC lib/bdev/scsi_nvme.o 00:02:02.079 SYMLINK libspdk_event.so 00:02:02.336 SYMLINK libspdk_nvme.so 00:02:04.238 LIB libspdk_blob.a 00:02:04.238 SO libspdk_blob.so.11.0 00:02:04.238 SYMLINK libspdk_blob.so 00:02:04.238 LIB libspdk_bdev.a 00:02:04.238 SO libspdk_bdev.so.15.0 00:02:04.238 SYMLINK libspdk_bdev.so 00:02:04.238 CC lib/blobfs/blobfs.o 00:02:04.238 CC lib/blobfs/tree.o 00:02:04.238 CC lib/lvol/lvol.o 00:02:04.497 CC lib/ublk/ublk_rpc.o 00:02:04.497 CC lib/ublk/ublk.o 00:02:04.497 CC lib/nvmf/ctrlr.o 00:02:04.497 CC lib/nvmf/ctrlr_bdev.o 00:02:04.497 CC lib/nvmf/ctrlr_discovery.o 00:02:04.497 CC lib/nbd/nbd.o 00:02:04.497 CC lib/nvmf/nvmf.o 00:02:04.497 CC lib/nvmf/subsystem.o 00:02:04.497 CC lib/nvmf/tcp.o 00:02:04.497 CC lib/nbd/nbd_rpc.o 00:02:04.497 CC lib/nvmf/nvmf_rpc.o 00:02:04.497 CC lib/nvmf/transport.o 00:02:04.497 CC lib/nvmf/rdma.o 00:02:04.497 CC lib/scsi/dev.o 00:02:04.497 CC lib/scsi/lun.o 00:02:04.497 CC lib/scsi/scsi_bdev.o 00:02:04.497 CC lib/scsi/port.o 00:02:04.497 CC lib/scsi/scsi_pr.o 00:02:04.497 CC lib/scsi/scsi.o 00:02:04.497 CC lib/scsi/task.o 00:02:04.497 CC lib/scsi/scsi_rpc.o 00:02:04.497 CC lib/ftl/ftl_core.o 00:02:04.497 CC lib/ftl/ftl_debug.o 00:02:04.497 CC 
lib/ftl/ftl_init.o 00:02:04.497 CC lib/ftl/ftl_io.o 00:02:04.497 CC lib/ftl/ftl_layout.o 00:02:04.497 CC lib/ftl/ftl_l2p.o 00:02:04.497 CC lib/ftl/ftl_sb.o 00:02:04.497 CC lib/ftl/ftl_l2p_flat.o 00:02:04.497 CC lib/ftl/ftl_nv_cache.o 00:02:04.497 CC lib/ftl/ftl_band_ops.o 00:02:04.497 CC lib/ftl/ftl_band.o 00:02:04.497 CC lib/ftl/ftl_rq.o 00:02:04.497 CC lib/ftl/ftl_writer.o 00:02:04.497 CC lib/ftl/ftl_reloc.o 00:02:04.497 CC lib/ftl/ftl_l2p_cache.o 00:02:04.497 CC lib/ftl/ftl_p2l.o 00:02:04.497 CC lib/ftl/mngt/ftl_mngt.o 00:02:04.497 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:04.497 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:04.497 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:04.497 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:04.497 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:04.497 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:04.497 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:04.497 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:04.497 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:04.497 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:04.497 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:04.497 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:04.497 CC lib/ftl/utils/ftl_conf.o 00:02:04.497 CC lib/ftl/utils/ftl_md.o 00:02:04.497 CC lib/ftl/utils/ftl_bitmap.o 00:02:04.497 CC lib/ftl/utils/ftl_property.o 00:02:04.497 CC lib/ftl/utils/ftl_mempool.o 00:02:04.497 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:04.497 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:04.497 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:04.497 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:04.497 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:04.497 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:04.497 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:04.497 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:04.497 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:04.497 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:04.497 CC lib/ftl/base/ftl_base_bdev.o 00:02:04.497 CC lib/ftl/base/ftl_base_dev.o 00:02:04.497 CC lib/ftl/ftl_trace.o 00:02:05.063 LIB libspdk_blobfs.a 00:02:05.063 SO libspdk_blobfs.so.10.0 00:02:05.063 SYMLINK libspdk_blobfs.so 00:02:05.063 LIB libspdk_nbd.a 00:02:05.321 LIB libspdk_scsi.a 00:02:05.321 SO libspdk_nbd.so.7.0 00:02:05.321 SO libspdk_scsi.so.9.0 00:02:05.321 SYMLINK libspdk_nbd.so 00:02:05.321 LIB libspdk_lvol.a 00:02:05.321 SYMLINK libspdk_scsi.so 00:02:05.321 SO libspdk_lvol.so.10.0 00:02:05.321 SYMLINK libspdk_lvol.so 00:02:05.321 LIB libspdk_ublk.a 00:02:05.580 SO libspdk_ublk.so.3.0 00:02:05.580 SYMLINK libspdk_ublk.so 00:02:05.580 CC lib/iscsi/conn.o 00:02:05.580 CC lib/iscsi/init_grp.o 00:02:05.580 CC lib/iscsi/iscsi.o 00:02:05.580 CC lib/iscsi/tgt_node.o 00:02:05.580 CC lib/iscsi/md5.o 00:02:05.580 CC lib/iscsi/portal_grp.o 00:02:05.580 CC lib/iscsi/param.o 00:02:05.580 CC lib/iscsi/iscsi_subsystem.o 00:02:05.580 CC lib/iscsi/iscsi_rpc.o 00:02:05.580 CC lib/vhost/vhost.o 00:02:05.580 CC lib/vhost/vhost_rpc.o 00:02:05.580 CC lib/vhost/vhost_scsi.o 00:02:05.580 CC lib/iscsi/task.o 00:02:05.580 CC lib/vhost/rte_vhost_user.o 00:02:05.580 CC lib/vhost/vhost_blk.o 00:02:05.838 LIB libspdk_ftl.a 00:02:05.838 SO libspdk_ftl.so.9.0 00:02:06.096 SYMLINK libspdk_ftl.so 00:02:06.663 LIB libspdk_vhost.a 00:02:06.663 SO libspdk_vhost.so.8.0 00:02:06.663 SYMLINK libspdk_vhost.so 00:02:06.663 LIB libspdk_nvmf.a 00:02:06.663 SO libspdk_nvmf.so.18.0 00:02:06.921 SYMLINK libspdk_nvmf.so 00:02:07.180 LIB libspdk_iscsi.a 00:02:07.180 SO libspdk_iscsi.so.8.0 00:02:07.437 SYMLINK libspdk_iscsi.so 00:02:07.695 CC module/env_dpdk/env_dpdk_rpc.o 00:02:07.695 CC module/accel/error/accel_error.o 00:02:07.695 CC 
module/accel/error/accel_error_rpc.o 00:02:07.695 CC module/blob/bdev/blob_bdev.o 00:02:07.695 CC module/accel/ioat/accel_ioat_rpc.o 00:02:07.695 CC module/accel/ioat/accel_ioat.o 00:02:07.695 CC module/keyring/file/keyring_rpc.o 00:02:07.695 CC module/keyring/file/keyring.o 00:02:07.695 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:07.695 CC module/accel/dsa/accel_dsa.o 00:02:07.695 CC module/accel/dsa/accel_dsa_rpc.o 00:02:07.695 CC module/sock/posix/posix.o 00:02:07.695 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:07.695 CC module/accel/iaa/accel_iaa_rpc.o 00:02:07.695 CC module/accel/iaa/accel_iaa.o 00:02:07.695 CC module/scheduler/gscheduler/gscheduler.o 00:02:07.953 LIB libspdk_env_dpdk_rpc.a 00:02:07.953 SO libspdk_env_dpdk_rpc.so.6.0 00:02:07.953 SYMLINK libspdk_env_dpdk_rpc.so 00:02:07.953 LIB libspdk_scheduler_dynamic.a 00:02:07.953 LIB libspdk_scheduler_dpdk_governor.a 00:02:07.953 LIB libspdk_keyring_file.a 00:02:07.953 LIB libspdk_scheduler_gscheduler.a 00:02:07.953 LIB libspdk_accel_error.a 00:02:07.953 SO libspdk_scheduler_dynamic.so.4.0 00:02:07.953 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:07.953 SO libspdk_keyring_file.so.1.0 00:02:07.953 SO libspdk_scheduler_gscheduler.so.4.0 00:02:07.953 LIB libspdk_accel_ioat.a 00:02:07.953 SO libspdk_accel_error.so.2.0 00:02:07.953 LIB libspdk_blob_bdev.a 00:02:07.953 SYMLINK libspdk_scheduler_dynamic.so 00:02:07.953 LIB libspdk_accel_iaa.a 00:02:07.953 SO libspdk_accel_ioat.so.6.0 00:02:07.953 SO libspdk_blob_bdev.so.11.0 00:02:07.953 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:07.953 SYMLINK libspdk_scheduler_gscheduler.so 00:02:07.953 SYMLINK libspdk_keyring_file.so 00:02:07.953 SO libspdk_accel_iaa.so.3.0 00:02:07.953 SYMLINK libspdk_accel_error.so 00:02:07.953 LIB libspdk_accel_dsa.a 00:02:08.211 SO libspdk_accel_dsa.so.5.0 00:02:08.211 SYMLINK libspdk_accel_ioat.so 00:02:08.211 SYMLINK libspdk_blob_bdev.so 00:02:08.211 SYMLINK libspdk_accel_iaa.so 00:02:08.211 SYMLINK libspdk_accel_dsa.so 00:02:08.468 LIB libspdk_sock_posix.a 00:02:08.468 CC module/bdev/null/bdev_null.o 00:02:08.468 CC module/bdev/nvme/bdev_nvme.o 00:02:08.468 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:08.468 CC module/bdev/null/bdev_null_rpc.o 00:02:08.468 CC module/bdev/iscsi/bdev_iscsi.o 00:02:08.468 CC module/bdev/nvme/bdev_mdns_client.o 00:02:08.468 CC module/bdev/nvme/nvme_rpc.o 00:02:08.468 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:08.468 CC module/bdev/nvme/vbdev_opal.o 00:02:08.468 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:08.468 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:08.468 CC module/bdev/delay/vbdev_delay.o 00:02:08.468 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:08.468 CC module/bdev/malloc/bdev_malloc.o 00:02:08.468 CC module/bdev/split/vbdev_split.o 00:02:08.468 CC module/blobfs/bdev/blobfs_bdev.o 00:02:08.468 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:08.468 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:08.468 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:08.468 CC module/bdev/raid/bdev_raid.o 00:02:08.468 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:08.468 CC module/bdev/split/vbdev_split_rpc.o 00:02:08.468 CC module/bdev/raid/bdev_raid_rpc.o 00:02:08.468 CC module/bdev/raid/raid0.o 00:02:08.468 CC module/bdev/raid/bdev_raid_sb.o 00:02:08.468 CC module/bdev/gpt/vbdev_gpt.o 00:02:08.468 CC module/bdev/raid/concat.o 00:02:08.468 CC module/bdev/raid/raid1.o 00:02:08.468 CC module/bdev/gpt/gpt.o 00:02:08.468 CC module/bdev/aio/bdev_aio_rpc.o 00:02:08.468 CC module/bdev/error/vbdev_error.o 
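The CC module/bdev/*.o run that starts here (and continues below with aio, lvol, virtio, ftl, and passthru) builds SPDK's block-device modules; once a target application is up, they are all driven through the JSON-RPC socket. A hedged sketch of exercising two of them with the rpc.py helper follows; the binary path and the bdev_null_create argument order are recalled from the tool and may differ across SPDK versions.

# Start a target and poke the bdev layer over JSON-RPC (paths assumed).
./build/bin/spdk_tgt &                           # assumed output path for the target app
sleep 2                                          # crude wait for the RPC socket
./scripts/rpc.py bdev_null_create Null0 64 512   # name, size in MiB, block size (assumed order)
./scripts/rpc.py bdev_get_bdevs                  # dump every registered bdev as JSON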
00:02:08.468 CC module/bdev/aio/bdev_aio.o 00:02:08.468 CC module/bdev/error/vbdev_error_rpc.o 00:02:08.468 CC module/bdev/lvol/vbdev_lvol.o 00:02:08.468 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:08.468 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:08.468 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:08.468 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:08.468 CC module/bdev/ftl/bdev_ftl.o 00:02:08.468 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:08.468 CC module/bdev/passthru/vbdev_passthru.o 00:02:08.468 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:08.468 SO libspdk_sock_posix.so.6.0 00:02:08.469 SYMLINK libspdk_sock_posix.so 00:02:08.726 LIB libspdk_blobfs_bdev.a 00:02:08.726 SO libspdk_blobfs_bdev.so.6.0 00:02:08.726 LIB libspdk_bdev_error.a 00:02:08.726 LIB libspdk_bdev_delay.a 00:02:08.726 LIB libspdk_bdev_split.a 00:02:08.726 LIB libspdk_bdev_iscsi.a 00:02:08.726 LIB libspdk_bdev_gpt.a 00:02:08.726 SO libspdk_bdev_error.so.6.0 00:02:08.726 LIB libspdk_bdev_null.a 00:02:08.726 SO libspdk_bdev_delay.so.6.0 00:02:08.726 SO libspdk_bdev_iscsi.so.6.0 00:02:08.726 SO libspdk_bdev_split.so.6.0 00:02:08.726 SYMLINK libspdk_blobfs_bdev.so 00:02:08.726 LIB libspdk_bdev_passthru.a 00:02:08.726 SO libspdk_bdev_gpt.so.6.0 00:02:08.726 SO libspdk_bdev_null.so.6.0 00:02:08.726 SO libspdk_bdev_passthru.so.6.0 00:02:08.726 LIB libspdk_bdev_zone_block.a 00:02:08.726 LIB libspdk_bdev_ftl.a 00:02:08.726 SYMLINK libspdk_bdev_error.so 00:02:08.984 SYMLINK libspdk_bdev_delay.so 00:02:08.984 SYMLINK libspdk_bdev_split.so 00:02:08.984 SYMLINK libspdk_bdev_iscsi.so 00:02:08.984 SYMLINK libspdk_bdev_gpt.so 00:02:08.984 SO libspdk_bdev_ftl.so.6.0 00:02:08.984 SYMLINK libspdk_bdev_null.so 00:02:08.984 SO libspdk_bdev_zone_block.so.6.0 00:02:08.984 LIB libspdk_bdev_malloc.a 00:02:08.984 SYMLINK libspdk_bdev_passthru.so 00:02:08.984 LIB libspdk_bdev_aio.a 00:02:08.984 SO libspdk_bdev_malloc.so.6.0 00:02:08.984 SO libspdk_bdev_aio.so.6.0 00:02:08.984 SYMLINK libspdk_bdev_ftl.so 00:02:08.984 SYMLINK libspdk_bdev_zone_block.so 00:02:08.984 SYMLINK libspdk_bdev_malloc.so 00:02:08.984 SYMLINK libspdk_bdev_aio.so 00:02:08.984 LIB libspdk_bdev_lvol.a 00:02:08.984 LIB libspdk_bdev_virtio.a 00:02:08.984 SO libspdk_bdev_lvol.so.6.0 00:02:08.984 SO libspdk_bdev_virtio.so.6.0 00:02:09.244 SYMLINK libspdk_bdev_virtio.so 00:02:09.244 SYMLINK libspdk_bdev_lvol.so 00:02:09.504 LIB libspdk_bdev_raid.a 00:02:09.504 SO libspdk_bdev_raid.so.6.0 00:02:09.504 SYMLINK libspdk_bdev_raid.so 00:02:10.878 LIB libspdk_bdev_nvme.a 00:02:10.878 SO libspdk_bdev_nvme.so.7.0 00:02:10.878 SYMLINK libspdk_bdev_nvme.so 00:02:11.137 CC module/event/subsystems/keyring/keyring.o 00:02:11.137 CC module/event/subsystems/iobuf/iobuf.o 00:02:11.137 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:11.137 CC module/event/subsystems/sock/sock.o 00:02:11.137 CC module/event/subsystems/vmd/vmd.o 00:02:11.137 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:11.137 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:11.137 CC module/event/subsystems/scheduler/scheduler.o 00:02:11.137 LIB libspdk_event_sock.a 00:02:11.395 LIB libspdk_event_keyring.a 00:02:11.395 SO libspdk_event_sock.so.5.0 00:02:11.395 LIB libspdk_event_vhost_blk.a 00:02:11.395 SO libspdk_event_keyring.so.1.0 00:02:11.395 LIB libspdk_event_vmd.a 00:02:11.395 LIB libspdk_event_iobuf.a 00:02:11.395 LIB libspdk_event_scheduler.a 00:02:11.395 SO libspdk_event_vhost_blk.so.3.0 00:02:11.395 SO libspdk_event_scheduler.so.4.0 00:02:11.395 SO libspdk_event_vmd.so.6.0 00:02:11.395 SO 
libspdk_event_iobuf.so.3.0 00:02:11.395 SYMLINK libspdk_event_sock.so 00:02:11.395 SYMLINK libspdk_event_keyring.so 00:02:11.395 SYMLINK libspdk_event_vhost_blk.so 00:02:11.395 SYMLINK libspdk_event_scheduler.so 00:02:11.395 SYMLINK libspdk_event_iobuf.so 00:02:11.395 SYMLINK libspdk_event_vmd.so 00:02:11.653 CC module/event/subsystems/accel/accel.o 00:02:11.653 LIB libspdk_event_accel.a 00:02:11.653 SO libspdk_event_accel.so.6.0 00:02:11.653 SYMLINK libspdk_event_accel.so 00:02:11.911 CC module/event/subsystems/bdev/bdev.o 00:02:12.168 LIB libspdk_event_bdev.a 00:02:12.168 SO libspdk_event_bdev.so.6.0 00:02:12.168 SYMLINK libspdk_event_bdev.so 00:02:12.426 CC module/event/subsystems/scsi/scsi.o 00:02:12.426 CC module/event/subsystems/nbd/nbd.o 00:02:12.426 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:12.426 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:12.426 CC module/event/subsystems/ublk/ublk.o 00:02:12.426 LIB libspdk_event_nbd.a 00:02:12.426 LIB libspdk_event_scsi.a 00:02:12.426 LIB libspdk_event_ublk.a 00:02:12.426 SO libspdk_event_scsi.so.6.0 00:02:12.426 SO libspdk_event_nbd.so.6.0 00:02:12.426 SO libspdk_event_ublk.so.3.0 00:02:12.685 SYMLINK libspdk_event_scsi.so 00:02:12.685 SYMLINK libspdk_event_nbd.so 00:02:12.685 SYMLINK libspdk_event_ublk.so 00:02:12.685 LIB libspdk_event_nvmf.a 00:02:12.685 SO libspdk_event_nvmf.so.6.0 00:02:12.685 SYMLINK libspdk_event_nvmf.so 00:02:12.945 CC module/event/subsystems/iscsi/iscsi.o 00:02:12.945 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:12.945 LIB libspdk_event_vhost_scsi.a 00:02:12.945 LIB libspdk_event_iscsi.a 00:02:12.945 SO libspdk_event_vhost_scsi.so.3.0 00:02:12.945 SO libspdk_event_iscsi.so.6.0 00:02:12.945 SYMLINK libspdk_event_iscsi.so 00:02:13.203 SYMLINK libspdk_event_vhost_scsi.so 00:02:13.203 SO libspdk.so.6.0 00:02:13.203 SYMLINK libspdk.so 00:02:13.461 CC app/spdk_nvme_perf/perf.o 00:02:13.461 CC app/trace_record/trace_record.o 00:02:13.461 CC app/spdk_nvme_identify/identify.o 00:02:13.461 CXX app/trace/trace.o 00:02:13.461 CC app/spdk_top/spdk_top.o 00:02:13.461 CC app/spdk_nvme_discover/discovery_aer.o 00:02:13.461 CC app/spdk_lspci/spdk_lspci.o 00:02:13.461 CC app/iscsi_tgt/iscsi_tgt.o 00:02:13.461 TEST_HEADER include/spdk/accel.h 00:02:13.461 TEST_HEADER include/spdk/accel_module.h 00:02:13.461 CC app/nvmf_tgt/nvmf_main.o 00:02:13.461 TEST_HEADER include/spdk/base64.h 00:02:13.461 TEST_HEADER include/spdk/barrier.h 00:02:13.461 TEST_HEADER include/spdk/bdev.h 00:02:13.461 TEST_HEADER include/spdk/assert.h 00:02:13.461 TEST_HEADER include/spdk/bdev_module.h 00:02:13.461 CC test/rpc_client/rpc_client_test.o 00:02:13.461 TEST_HEADER include/spdk/bit_pool.h 00:02:13.461 CC app/spdk_dd/spdk_dd.o 00:02:13.461 TEST_HEADER include/spdk/blob_bdev.h 00:02:13.461 TEST_HEADER include/spdk/bdev_zone.h 00:02:13.461 TEST_HEADER include/spdk/bit_array.h 00:02:13.461 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:13.461 CC app/vhost/vhost.o 00:02:13.461 CC app/spdk_tgt/spdk_tgt.o 00:02:13.461 TEST_HEADER include/spdk/blobfs.h 00:02:13.461 TEST_HEADER include/spdk/conf.h 00:02:13.461 TEST_HEADER include/spdk/config.h 00:02:13.461 TEST_HEADER include/spdk/cpuset.h 00:02:13.461 TEST_HEADER include/spdk/crc16.h 00:02:13.461 TEST_HEADER include/spdk/blob.h 00:02:13.461 TEST_HEADER include/spdk/crc64.h 00:02:13.461 TEST_HEADER include/spdk/dif.h 00:02:13.461 TEST_HEADER include/spdk/crc32.h 00:02:13.461 TEST_HEADER include/spdk/endian.h 00:02:13.461 TEST_HEADER include/spdk/env.h 00:02:13.461 TEST_HEADER include/spdk/dma.h 
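The long TEST_HEADER run enumerates every public header under include/spdk/ for the cpp_headers check: each header is compiled on its own (the CXX test/cpp_headers/*.o lines that follow), so a header that forgets one of its own includes fails here rather than in a user's build. The real harness lives under test/cpp_headers; this loop is only a sketch of the idea.

# Compile each public header standalone; a self-containedness check (illustrative).
for h in include/spdk/*.h; do
    g++ -fsyntax-only -Iinclude -include "$h" -x c++ /dev/null \
        || echo "not self-contained: $h"
done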
00:02:13.461 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:13.461 TEST_HEADER include/spdk/env_dpdk.h 00:02:13.461 TEST_HEADER include/spdk/fd_group.h 00:02:13.461 TEST_HEADER include/spdk/event.h 00:02:13.461 TEST_HEADER include/spdk/file.h 00:02:13.461 TEST_HEADER include/spdk/fd.h 00:02:13.461 TEST_HEADER include/spdk/gpt_spec.h 00:02:13.461 TEST_HEADER include/spdk/ftl.h 00:02:13.461 TEST_HEADER include/spdk/hexlify.h 00:02:13.461 TEST_HEADER include/spdk/histogram_data.h 00:02:13.461 TEST_HEADER include/spdk/init.h 00:02:13.461 TEST_HEADER include/spdk/idxd_spec.h 00:02:13.461 TEST_HEADER include/spdk/idxd.h 00:02:13.461 TEST_HEADER include/spdk/ioat_spec.h 00:02:13.461 TEST_HEADER include/spdk/ioat.h 00:02:13.461 TEST_HEADER include/spdk/json.h 00:02:13.461 TEST_HEADER include/spdk/iscsi_spec.h 00:02:13.461 TEST_HEADER include/spdk/keyring.h 00:02:13.461 TEST_HEADER include/spdk/jsonrpc.h 00:02:13.461 TEST_HEADER include/spdk/keyring_module.h 00:02:13.722 TEST_HEADER include/spdk/likely.h 00:02:13.722 TEST_HEADER include/spdk/log.h 00:02:13.722 TEST_HEADER include/spdk/lvol.h 00:02:13.722 TEST_HEADER include/spdk/mmio.h 00:02:13.722 TEST_HEADER include/spdk/memory.h 00:02:13.722 TEST_HEADER include/spdk/notify.h 00:02:13.722 TEST_HEADER include/spdk/nvme.h 00:02:13.722 TEST_HEADER include/spdk/nbd.h 00:02:13.722 TEST_HEADER include/spdk/nvme_intel.h 00:02:13.722 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:13.722 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:13.722 TEST_HEADER include/spdk/nvme_spec.h 00:02:13.722 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:13.722 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:13.722 TEST_HEADER include/spdk/nvme_zns.h 00:02:13.722 TEST_HEADER include/spdk/nvmf.h 00:02:13.722 TEST_HEADER include/spdk/nvmf_transport.h 00:02:13.722 TEST_HEADER include/spdk/nvmf_spec.h 00:02:13.722 TEST_HEADER include/spdk/opal_spec.h 00:02:13.722 TEST_HEADER include/spdk/opal.h 00:02:13.722 TEST_HEADER include/spdk/queue.h 00:02:13.722 TEST_HEADER include/spdk/pipe.h 00:02:13.722 TEST_HEADER include/spdk/pci_ids.h 00:02:13.722 TEST_HEADER include/spdk/reduce.h 00:02:13.722 TEST_HEADER include/spdk/scheduler.h 00:02:13.722 TEST_HEADER include/spdk/rpc.h 00:02:13.722 TEST_HEADER include/spdk/scsi.h 00:02:13.722 TEST_HEADER include/spdk/scsi_spec.h 00:02:13.722 TEST_HEADER include/spdk/stdinc.h 00:02:13.722 TEST_HEADER include/spdk/string.h 00:02:13.722 TEST_HEADER include/spdk/thread.h 00:02:13.722 TEST_HEADER include/spdk/sock.h 00:02:13.722 TEST_HEADER include/spdk/trace_parser.h 00:02:13.722 TEST_HEADER include/spdk/trace.h 00:02:13.722 TEST_HEADER include/spdk/tree.h 00:02:13.722 TEST_HEADER include/spdk/ublk.h 00:02:13.722 TEST_HEADER include/spdk/util.h 00:02:13.722 TEST_HEADER include/spdk/uuid.h 00:02:13.722 TEST_HEADER include/spdk/version.h 00:02:13.722 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:13.722 TEST_HEADER include/spdk/vhost.h 00:02:13.722 TEST_HEADER include/spdk/vmd.h 00:02:13.722 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:13.722 TEST_HEADER include/spdk/xor.h 00:02:13.722 TEST_HEADER include/spdk/zipf.h 00:02:13.722 CXX test/cpp_headers/accel.o 00:02:13.722 CXX test/cpp_headers/assert.o 00:02:13.722 CXX test/cpp_headers/accel_module.o 00:02:13.722 CXX test/cpp_headers/barrier.o 00:02:13.722 CXX test/cpp_headers/base64.o 00:02:13.722 CXX test/cpp_headers/bdev.o 00:02:13.722 CXX test/cpp_headers/bdev_module.o 00:02:13.722 CXX test/cpp_headers/bit_pool.o 00:02:13.722 CXX test/cpp_headers/bit_array.o 00:02:13.722 CXX 
test/cpp_headers/blob_bdev.o 00:02:13.722 CXX test/cpp_headers/blobfs_bdev.o 00:02:13.722 CXX test/cpp_headers/bdev_zone.o 00:02:13.722 CC examples/accel/perf/accel_perf.o 00:02:13.722 CXX test/cpp_headers/blobfs.o 00:02:13.722 CXX test/cpp_headers/conf.o 00:02:13.722 CXX test/cpp_headers/blob.o 00:02:13.722 CXX test/cpp_headers/crc32.o 00:02:13.722 CXX test/cpp_headers/cpuset.o 00:02:13.722 CXX test/cpp_headers/crc16.o 00:02:13.722 CXX test/cpp_headers/config.o 00:02:13.722 CXX test/cpp_headers/dif.o 00:02:13.722 CXX test/cpp_headers/dma.o 00:02:13.722 CXX test/cpp_headers/crc64.o 00:02:13.722 CXX test/cpp_headers/env.o 00:02:13.722 CXX test/cpp_headers/event.o 00:02:13.722 CXX test/cpp_headers/endian.o 00:02:13.722 CXX test/cpp_headers/fd_group.o 00:02:13.722 CXX test/cpp_headers/env_dpdk.o 00:02:13.722 CC examples/ioat/verify/verify.o 00:02:13.722 CC test/env/vtophys/vtophys.o 00:02:13.722 CC app/fio/nvme/fio_plugin.o 00:02:13.722 CXX test/cpp_headers/ftl.o 00:02:13.722 CXX test/cpp_headers/file.o 00:02:13.722 CXX test/cpp_headers/fd.o 00:02:13.722 CC examples/ioat/perf/perf.o 00:02:13.722 CXX test/cpp_headers/hexlify.o 00:02:13.722 CXX test/cpp_headers/gpt_spec.o 00:02:13.722 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:13.722 CXX test/cpp_headers/idxd_spec.o 00:02:13.722 CXX test/cpp_headers/idxd.o 00:02:13.722 CXX test/cpp_headers/histogram_data.o 00:02:13.722 CC test/env/pci/pci_ut.o 00:02:13.722 CC examples/sock/hello_world/hello_sock.o 00:02:13.722 CXX test/cpp_headers/ioat.o 00:02:13.722 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:13.722 CXX test/cpp_headers/init.o 00:02:13.722 CC examples/nvme/abort/abort.o 00:02:13.722 CC examples/nvme/hotplug/hotplug.o 00:02:13.722 CXX test/cpp_headers/json.o 00:02:13.722 CXX test/cpp_headers/jsonrpc.o 00:02:13.722 CXX test/cpp_headers/keyring_module.o 00:02:13.722 CXX test/cpp_headers/iscsi_spec.o 00:02:13.722 CXX test/cpp_headers/ioat_spec.o 00:02:13.722 CXX test/cpp_headers/likely.o 00:02:13.722 CXX test/cpp_headers/log.o 00:02:13.722 CXX test/cpp_headers/keyring.o 00:02:13.722 CXX test/cpp_headers/lvol.o 00:02:13.722 CC test/env/memory/memory_ut.o 00:02:13.722 CC examples/nvme/hello_world/hello_world.o 00:02:13.722 CXX test/cpp_headers/mmio.o 00:02:13.722 CC test/nvme/sgl/sgl.o 00:02:13.722 CXX test/cpp_headers/memory.o 00:02:13.722 CXX test/cpp_headers/nbd.o 00:02:13.722 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:13.722 CC examples/vmd/led/led.o 00:02:13.722 CXX test/cpp_headers/notify.o 00:02:13.722 CC examples/vmd/lsvmd/lsvmd.o 00:02:13.722 CXX test/cpp_headers/nvme.o 00:02:13.722 LINK spdk_lspci 00:02:13.722 CC test/event/app_repeat/app_repeat.o 00:02:13.722 CC examples/nvme/arbitration/arbitration.o 00:02:13.722 CC test/nvme/cuse/cuse.o 00:02:13.722 CC test/thread/poller_perf/poller_perf.o 00:02:13.722 CC test/event/event_perf/event_perf.o 00:02:13.722 CC examples/nvme/reconnect/reconnect.o 00:02:13.722 CC test/nvme/boot_partition/boot_partition.o 00:02:13.722 CC examples/util/zipf/zipf.o 00:02:13.722 CC test/nvme/startup/startup.o 00:02:13.722 CC examples/blob/cli/blobcli.o 00:02:13.722 CC test/app/stub/stub.o 00:02:13.722 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:13.722 CC examples/idxd/perf/perf.o 00:02:13.722 CC test/nvme/overhead/overhead.o 00:02:13.722 CC test/nvme/e2edp/nvme_dp.o 00:02:13.722 CC test/nvme/fused_ordering/fused_ordering.o 00:02:13.722 CC test/app/jsoncat/jsoncat.o 00:02:13.722 CC test/nvme/simple_copy/simple_copy.o 00:02:13.981 CC examples/bdev/hello_world/hello_bdev.o 
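The examples/*.o and test/nvme/*.o objects in this stretch are the fixtures the later functional stages link and run; the nvme hello_world example compiled just above is the usual smoke test against a real device. A hedged sketch of running it by hand after a build; the example's output path is an assumption that varies across SPDK versions.

# Bind NVMe devices to a userspace driver, run the example, then undo it.
sudo ./scripts/setup.sh                  # SPDK helper: hugepages + driver binding
sudo ./build/examples/hello_world        # assumed install path for the example binary
sudo ./scripts/setup.sh reset            # return the devices to the kernel driver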
00:02:13.981 CC test/nvme/err_injection/err_injection.o 00:02:13.981 CC test/event/reactor_perf/reactor_perf.o 00:02:13.981 CC test/event/reactor/reactor.o 00:02:13.981 CXX test/cpp_headers/nvme_intel.o 00:02:13.981 CC test/nvme/reserve/reserve.o 00:02:13.981 CC examples/thread/thread/thread_ex.o 00:02:13.981 CC test/app/histogram_perf/histogram_perf.o 00:02:13.981 CC examples/blob/hello_world/hello_blob.o 00:02:13.981 CC app/fio/bdev/fio_plugin.o 00:02:13.981 CC test/nvme/reset/reset.o 00:02:13.981 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:13.981 CC test/accel/dif/dif.o 00:02:13.981 CC test/event/scheduler/scheduler.o 00:02:13.981 CC test/app/bdev_svc/bdev_svc.o 00:02:13.981 CC test/dma/test_dma/test_dma.o 00:02:13.981 CC test/nvme/fdp/fdp.o 00:02:13.981 CC test/nvme/connect_stress/connect_stress.o 00:02:13.981 CXX test/cpp_headers/nvme_ocssd.o 00:02:13.981 CC test/nvme/aer/aer.o 00:02:13.981 CC examples/bdev/bdevperf/bdevperf.o 00:02:13.981 CC test/blobfs/mkfs/mkfs.o 00:02:13.981 CC test/nvme/compliance/nvme_compliance.o 00:02:13.981 LINK spdk_nvme_discover 00:02:13.981 CC examples/nvmf/nvmf/nvmf.o 00:02:13.981 CC test/bdev/bdevio/bdevio.o 00:02:14.250 LINK nvmf_tgt 00:02:14.250 LINK spdk_tgt 00:02:14.250 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:14.250 CC test/lvol/esnap/esnap.o 00:02:14.250 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:14.250 CC test/env/mem_callbacks/mem_callbacks.o 00:02:14.250 LINK pmr_persistence 00:02:14.518 LINK interrupt_tgt 00:02:14.518 LINK vhost 00:02:14.518 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:14.518 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:14.518 LINK iscsi_tgt 00:02:14.518 LINK event_perf 00:02:14.518 LINK app_repeat 00:02:14.518 LINK ioat_perf 00:02:14.518 LINK rpc_client_test 00:02:14.518 LINK spdk_dd 00:02:14.518 LINK startup 00:02:14.519 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:14.519 LINK bdev_svc 00:02:14.519 LINK reactor_perf 00:02:14.519 LINK hello_sock 00:02:14.519 LINK lsvmd 00:02:14.519 LINK err_injection 00:02:14.519 CXX test/cpp_headers/nvme_spec.o 00:02:14.519 LINK scheduler 00:02:14.519 CXX test/cpp_headers/nvme_zns.o 00:02:14.519 CXX test/cpp_headers/nvmf_cmd.o 00:02:14.519 LINK poller_perf 00:02:14.519 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:14.519 CXX test/cpp_headers/nvmf.o 00:02:14.519 CXX test/cpp_headers/opal.o 00:02:14.519 CXX test/cpp_headers/opal_spec.o 00:02:14.519 CXX test/cpp_headers/nvmf_spec.o 00:02:14.519 LINK led 00:02:14.519 CXX test/cpp_headers/nvmf_transport.o 00:02:14.519 CXX test/cpp_headers/pci_ids.o 00:02:14.519 LINK hotplug 00:02:14.519 CXX test/cpp_headers/pipe.o 00:02:14.519 CXX test/cpp_headers/queue.o 00:02:14.519 LINK jsoncat 00:02:14.780 LINK vtophys 00:02:14.780 CXX test/cpp_headers/reduce.o 00:02:14.780 CXX test/cpp_headers/rpc.o 00:02:14.780 LINK histogram_perf 00:02:14.780 CXX test/cpp_headers/scheduler.o 00:02:14.780 CXX test/cpp_headers/scsi.o 00:02:14.780 CXX test/cpp_headers/sock.o 00:02:14.780 CXX test/cpp_headers/scsi_spec.o 00:02:14.780 LINK sgl 00:02:14.780 CXX test/cpp_headers/stdinc.o 00:02:14.780 CXX test/cpp_headers/string.o 00:02:14.780 CXX test/cpp_headers/thread.o 00:02:14.780 CXX test/cpp_headers/trace.o 00:02:14.780 CXX test/cpp_headers/trace_parser.o 00:02:14.780 CXX test/cpp_headers/tree.o 00:02:14.780 CXX test/cpp_headers/ublk.o 00:02:14.780 LINK thread 00:02:14.780 CXX test/cpp_headers/util.o 00:02:14.780 LINK stub 00:02:14.780 CXX test/cpp_headers/uuid.o 00:02:14.780 LINK arbitration 00:02:14.780 LINK spdk_trace_record 00:02:14.780 CXX 
test/cpp_headers/version.o 00:02:14.780 CXX test/cpp_headers/vfio_user_pci.o 00:02:14.780 CXX test/cpp_headers/vfio_user_spec.o 00:02:14.780 LINK env_dpdk_post_init 00:02:14.780 CXX test/cpp_headers/vhost.o 00:02:14.780 LINK reset 00:02:14.780 CXX test/cpp_headers/vmd.o 00:02:14.780 CXX test/cpp_headers/zipf.o 00:02:14.780 CXX test/cpp_headers/xor.o 00:02:14.780 LINK reactor 00:02:14.780 LINK spdk_trace 00:02:14.780 LINK boot_partition 00:02:14.780 LINK zipf 00:02:14.780 LINK fdp 00:02:14.780 LINK abort 00:02:14.780 LINK cmb_copy 00:02:14.780 LINK doorbell_aers 00:02:14.780 LINK connect_stress 00:02:14.780 LINK reconnect 00:02:15.037 LINK nvme_compliance 00:02:15.037 LINK hello_bdev 00:02:15.037 LINK dif 00:02:15.037 LINK test_dma 00:02:15.037 LINK simple_copy 00:02:15.037 LINK verify 00:02:15.037 LINK mkfs 00:02:15.037 LINK accel_perf 00:02:15.037 LINK hello_blob 00:02:15.037 LINK hello_world 00:02:15.037 LINK reserve 00:02:15.037 LINK fused_ordering 00:02:15.037 LINK pci_ut 00:02:15.037 LINK nvme_manage 00:02:15.037 LINK idxd_perf 00:02:15.037 LINK nvme_dp 00:02:15.037 LINK overhead 00:02:15.037 LINK aer 00:02:15.296 LINK nvmf 00:02:15.296 LINK nvme_fuzz 00:02:15.296 LINK vhost_fuzz 00:02:15.296 LINK mem_callbacks 00:02:15.296 LINK spdk_bdev 00:02:15.296 LINK blobcli 00:02:15.296 LINK bdevio 00:02:15.296 LINK spdk_nvme 00:02:15.554 LINK memory_ut 00:02:15.554 LINK spdk_nvme_perf 00:02:15.554 LINK spdk_nvme_identify 00:02:15.554 LINK cuse 00:02:15.554 LINK bdevperf 00:02:15.554 LINK spdk_top 00:02:16.120 LINK iscsi_fuzz 00:02:18.644 LINK esnap 00:02:18.644 00:02:18.644 real 0m37.758s 00:02:18.644 user 5m54.979s 00:02:18.644 sys 5m3.903s 00:02:18.644 21:07:33 -- common/autotest_common.sh@1112 -- $ xtrace_disable 00:02:18.644 21:07:33 -- common/autotest_common.sh@10 -- $ set +x 00:02:18.644 ************************************ 00:02:18.644 END TEST make 00:02:18.644 ************************************ 00:02:18.644 21:07:33 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:18.644 21:07:33 -- pm/common@30 -- $ signal_monitor_resources TERM 00:02:18.644 21:07:33 -- pm/common@41 -- $ local monitor pid pids signal=TERM 00:02:18.644 21:07:33 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:18.644 21:07:33 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:18.644 21:07:33 -- pm/common@45 -- $ pid=886537 00:02:18.644 21:07:33 -- pm/common@52 -- $ sudo kill -TERM 886537 00:02:18.644 21:07:33 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:18.644 21:07:33 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:18.644 21:07:33 -- pm/common@45 -- $ pid=886533 00:02:18.644 21:07:33 -- pm/common@52 -- $ sudo kill -TERM 886533 00:02:18.644 21:07:33 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:18.644 21:07:33 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:18.644 21:07:33 -- pm/common@45 -- $ pid=886539 00:02:18.644 21:07:33 -- pm/common@52 -- $ sudo kill -TERM 886539 00:02:18.644 21:07:33 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:18.644 21:07:33 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:18.644 21:07:33 -- pm/common@45 -- $ pid=886536 00:02:18.644 21:07:33 -- pm/common@52 -- $ sudo kill -TERM 886536 00:02:18.644 21:07:33 -- spdk/autotest.sh@25 -- 
# source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:02:18.644 21:07:33 -- nvmf/common.sh@7 -- # uname -s 00:02:18.644 21:07:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:18.644 21:07:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:18.644 21:07:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:18.644 21:07:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:18.644 21:07:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:18.644 21:07:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:18.644 21:07:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:18.644 21:07:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:18.644 21:07:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:18.644 21:07:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:18.644 21:07:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b7babf-2e5c-ee11-906e-a4bf01970bf2 00:02:18.644 21:07:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=80b7babf-2e5c-ee11-906e-a4bf01970bf2 00:02:18.644 21:07:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:18.645 21:07:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:18.645 21:07:33 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:02:18.645 21:07:33 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:18.645 21:07:33 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:02:18.645 21:07:33 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:18.645 21:07:33 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:18.645 21:07:33 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:18.645 21:07:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:18.645 21:07:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:18.645 21:07:33 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:18.645 21:07:33 -- paths/export.sh@5 -- # export PATH 00:02:18.645 21:07:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:18.645 21:07:33 -- nvmf/common.sh@47 -- # : 0 00:02:18.645 21:07:33 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:18.645 21:07:33 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:18.645 21:07:33 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:18.645 21:07:33 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:18.645 21:07:33 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
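The test/nvmf/common.sh block sourced above pins the defaults the TCP transport tests run with: port 4420 on 127.0.0.1, a host NQN generated by nvme gen-hostnqn, and nqn.2016-06.io.spdk:testnqn as the test subsystem. Put together, the kernel-initiator side of such a test reduces to an nvme-cli exchange like the following sketch (values copied from the log; the actual connect happens later in the run).

# What NVME_CONNECT plus the sourced defaults expand to for a TCP attach.
nvme connect -t tcp -a 127.0.0.1 -s 4420 \
    -n nqn.2016-06.io.spdk:testnqn \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b7babf-2e5c-ee11-906e-a4bf01970bf2
nvme list-subsys                                 # confirm the subsystem appeared
nvme disconnect -n nqn.2016-06.io.spdk:testnqn   # tear the attach back down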
00:02:18.645 21:07:33 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:18.645 21:07:33 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:18.645 21:07:33 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:02:18.645 21:07:33 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:18.645 21:07:33 -- spdk/autotest.sh@32 -- # uname -s 00:02:18.645 21:07:33 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:18.645 21:07:33 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:18.645 21:07:33 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/coredumps 00:02:18.645 21:07:33 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:18.645 21:07:33 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/coredumps 00:02:18.645 21:07:33 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:18.645 21:07:33 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:18.645 21:07:33 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:18.645 21:07:33 -- spdk/autotest.sh@48 -- # udevadm_pid=945543 00:02:18.645 21:07:33 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:18.645 21:07:33 -- pm/common@17 -- # local monitor 00:02:18.645 21:07:33 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:18.645 21:07:33 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:18.645 21:07:33 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=945544 00:02:18.645 21:07:33 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:18.645 21:07:33 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=945545 00:02:18.645 21:07:33 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:18.645 21:07:33 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=945546 00:02:18.645 21:07:33 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:18.645 21:07:33 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=945550 00:02:18.645 21:07:33 -- pm/common@26 -- # sleep 1 00:02:18.645 21:07:33 -- pm/common@21 -- # date +%s 00:02:18.645 21:07:33 -- pm/common@21 -- # date +%s 00:02:18.645 21:07:33 -- pm/common@21 -- # date +%s 00:02:18.645 21:07:33 -- pm/common@21 -- # date +%s 00:02:18.645 21:07:33 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1713985653 00:02:18.645 21:07:33 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1713985653 00:02:18.645 21:07:33 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1713985653 00:02:18.645 21:07:33 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1713985653 00:02:18.645 Redirecting to /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power/monitor.autotest.sh.1713985653_collect-vmstat.pm.log 00:02:18.645 Redirecting to /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power/monitor.autotest.sh.1713985653_collect-cpu-temp.pm.log 00:02:18.645 Redirecting to 
/var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power/monitor.autotest.sh.1713985653_collect-bmc-pm.bmc.pm.log 00:02:18.645 Redirecting to /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power/monitor.autotest.sh.1713985653_collect-cpu-load.pm.log 00:02:19.578 21:07:34 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:19.578 21:07:34 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:19.578 21:07:34 -- common/autotest_common.sh@710 -- # xtrace_disable 00:02:19.578 21:07:34 -- common/autotest_common.sh@10 -- # set +x 00:02:19.578 21:07:34 -- spdk/autotest.sh@59 -- # create_test_list 00:02:19.578 21:07:34 -- common/autotest_common.sh@734 -- # xtrace_disable 00:02:19.578 21:07:34 -- common/autotest_common.sh@10 -- # set +x 00:02:19.578 21:07:34 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/dsa-phy-autotest/spdk/autotest.sh 00:02:19.578 21:07:34 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/dsa-phy-autotest/spdk 00:02:19.578 21:07:34 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/dsa-phy-autotest/spdk 00:02:19.578 21:07:34 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/dsa-phy-autotest/spdk/../output 00:02:19.578 21:07:34 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/dsa-phy-autotest/spdk 00:02:19.578 21:07:34 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:19.578 21:07:34 -- common/autotest_common.sh@1441 -- # uname 00:02:19.578 21:07:34 -- common/autotest_common.sh@1441 -- # '[' Linux = FreeBSD ']' 00:02:19.578 21:07:34 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:19.578 21:07:34 -- common/autotest_common.sh@1461 -- # uname 00:02:19.578 21:07:34 -- common/autotest_common.sh@1461 -- # [[ Linux = FreeBSD ]] 00:02:19.578 21:07:34 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:02:19.578 21:07:34 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:02:19.578 21:07:34 -- spdk/autotest.sh@72 -- # hash lcov 00:02:19.578 21:07:34 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:19.578 21:07:34 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:02:19.578 --rc lcov_branch_coverage=1 00:02:19.578 --rc lcov_function_coverage=1 00:02:19.578 --rc genhtml_branch_coverage=1 00:02:19.578 --rc genhtml_function_coverage=1 00:02:19.578 --rc genhtml_legend=1 00:02:19.578 --rc geninfo_all_blocks=1 00:02:19.578 ' 00:02:19.578 21:07:34 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:02:19.578 --rc lcov_branch_coverage=1 00:02:19.578 --rc lcov_function_coverage=1 00:02:19.578 --rc genhtml_branch_coverage=1 00:02:19.578 --rc genhtml_function_coverage=1 00:02:19.578 --rc genhtml_legend=1 00:02:19.578 --rc geninfo_all_blocks=1 00:02:19.578 ' 00:02:19.578 21:07:34 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:02:19.578 --rc lcov_branch_coverage=1 00:02:19.578 --rc lcov_function_coverage=1 00:02:19.578 --rc genhtml_branch_coverage=1 00:02:19.578 --rc genhtml_function_coverage=1 00:02:19.578 --rc genhtml_legend=1 00:02:19.578 --rc geninfo_all_blocks=1 00:02:19.578 --no-external' 00:02:19.578 21:07:34 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:02:19.578 --rc lcov_branch_coverage=1 00:02:19.578 --rc lcov_function_coverage=1 00:02:19.578 --rc genhtml_branch_coverage=1 00:02:19.578 --rc genhtml_function_coverage=1 00:02:19.578 --rc genhtml_legend=1 00:02:19.578 --rc geninfo_all_blocks=1 00:02:19.578 --no-external' 00:02:19.578 21:07:34 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc 
genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:19.836 lcov: LCOV version 1.14 00:02:19.836 21:07:34 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/dsa-phy-autotest/spdk -o /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_base.info 00:02:24.115 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:02:24.115 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:02:24.115 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:02:24.115 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:02:24.115 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:02:24.115 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:02:24.115 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:24.115 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:02:24.115 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:02:24.115 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:02:24.115 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:02:24.115 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:02:24.115 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:02:24.115 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:02:24.115 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:02:24.115 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:02:24.115 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:02:24.115 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:02:24.115 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:02:24.115 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:02:24.115 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:02:24.115 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:02:24.115 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:02:24.115 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:02:24.115 
[geninfo repeats the same pair of messages, "<path>:no functions found" and "geninfo: WARNING: GCOV did not produce any data for <path>", for every remaining .gcno under /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/] 00:02:24.116 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:02:24.116 geninfo: WARNING: GCOV did not produce any data for
/var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:02:24.116 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:02:24.116 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/version.gcno 00:02:24.116 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:02:24.117 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:02:24.117 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:02:24.117 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:02:24.117 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:02:24.117 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:02:26.017 /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:26.017 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:31.279 /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:02:31.279 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:02:31.279 /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:02:31.279 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:02:31.279 /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:02:31.279 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:02:35.464 21:07:49 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:02:35.464 21:07:49 -- common/autotest_common.sh@710 -- # xtrace_disable 00:02:35.464 21:07:49 -- common/autotest_common.sh@10 -- # set +x 00:02:35.464 21:07:49 -- spdk/autotest.sh@91 -- # rm -f 00:02:35.464 21:07:49 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh reset 00:02:38.004 0000:c9:00.0 (8086 0a54): Already using the nvme driver 00:02:38.005 0000:74:02.0 (8086 0cfe): Already using the idxd driver 00:02:38.005 0000:f1:02.0 (8086 0cfe): Already using the idxd driver 00:02:38.005 0000:cb:00.0 (8086 0a54): Already using the nvme driver 00:02:38.005 0000:79:02.0 (8086 0cfe): Already using the idxd driver 00:02:38.005 0000:6f:01.0 (8086 0b25): Already using the idxd driver 00:02:38.005 0000:6f:02.0 (8086 0cfe): Already using the idxd driver 00:02:38.005 0000:f6:01.0 (8086 0b25): Already using the idxd driver 00:02:38.005 0000:f6:02.0 (8086 0cfe): Already using the idxd driver 00:02:38.005 0000:74:01.0 (8086 0b25): Already using the idxd driver 00:02:38.005 0000:6a:02.0 (8086 0cfe): Already using the idxd driver 00:02:38.005 0000:79:01.0 (8086 0b25): Already using the idxd driver 00:02:38.005 0000:ec:01.0 (8086 0b25): Already using the idxd driver 00:02:38.005 0000:6a:01.0 (8086 0b25): Already using the idxd driver 00:02:38.005 0000:ca:00.0 (8086 0a54): Already using the nvme driver 00:02:38.005 0000:ec:02.0 (8086 0cfe): Already using the idxd 
driver 00:02:38.005 0000:e7:01.0 (8086 0b25): Already using the idxd driver 00:02:38.005 0000:e7:02.0 (8086 0cfe): Already using the idxd driver 00:02:38.005 0000:f1:01.0 (8086 0b25): Already using the idxd driver 00:02:38.264 21:07:53 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:02:38.264 21:07:53 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:02:38.264 21:07:53 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:02:38.264 21:07:53 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:02:38.264 21:07:53 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:02:38.264 21:07:53 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:02:38.264 21:07:53 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:02:38.264 21:07:53 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:38.264 21:07:53 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:02:38.264 21:07:53 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:02:38.264 21:07:53 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:02:38.264 21:07:53 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:02:38.264 21:07:53 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:02:38.264 21:07:53 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:02:38.264 21:07:53 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:02:38.264 21:07:53 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:02:38.264 21:07:53 -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:02:38.264 21:07:53 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:02:38.264 21:07:53 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:02:38.264 21:07:53 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:02:38.264 21:07:53 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:02:38.264 21:07:53 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:02:38.264 21:07:53 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:02:38.264 21:07:53 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:02:38.264 21:07:53 -- scripts/common.sh@387 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:38.264 No valid GPT data, bailing 00:02:38.264 21:07:53 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:38.264 21:07:53 -- scripts/common.sh@391 -- # pt= 00:02:38.264 21:07:53 -- scripts/common.sh@392 -- # return 1 00:02:38.264 21:07:53 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:38.524 1+0 records in 00:02:38.524 1+0 records out 00:02:38.524 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00410307 s, 256 MB/s 00:02:38.524 21:07:53 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:02:38.524 21:07:53 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:02:38.524 21:07:53 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:02:38.524 21:07:53 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:02:38.524 21:07:53 -- scripts/common.sh@387 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:02:38.524 No valid GPT data, bailing 00:02:38.524 21:07:53 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:02:38.524 21:07:53 -- scripts/common.sh@391 -- # pt= 00:02:38.524 21:07:53 -- scripts/common.sh@392 -- # return 1 00:02:38.524 21:07:53 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 
bs=1M count=1 00:02:38.524 1+0 records in 00:02:38.524 1+0 records out 00:02:38.524 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00351482 s, 298 MB/s 00:02:38.524 21:07:53 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:02:38.524 21:07:53 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:02:38.524 21:07:53 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme2n1 00:02:38.524 21:07:53 -- scripts/common.sh@378 -- # local block=/dev/nvme2n1 pt 00:02:38.524 21:07:53 -- scripts/common.sh@387 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:02:38.524 No valid GPT data, bailing 00:02:38.524 21:07:53 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:02:38.524 21:07:53 -- scripts/common.sh@391 -- # pt= 00:02:38.524 21:07:53 -- scripts/common.sh@392 -- # return 1 00:02:38.524 21:07:53 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:02:38.524 1+0 records in 00:02:38.524 1+0 records out 00:02:38.524 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00295248 s, 355 MB/s 00:02:38.524 21:07:53 -- spdk/autotest.sh@118 -- # sync 00:02:38.524 21:07:53 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:38.524 21:07:53 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:38.524 21:07:53 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:43.803 21:07:58 -- spdk/autotest.sh@124 -- # uname -s 00:02:43.803 21:07:58 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:02:43.803 21:07:58 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/test-setup.sh 00:02:43.803 21:07:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:43.803 21:07:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:43.803 21:07:58 -- common/autotest_common.sh@10 -- # set +x 00:02:43.803 ************************************ 00:02:43.803 START TEST setup.sh 00:02:43.803 ************************************ 00:02:43.803 21:07:58 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/test-setup.sh 00:02:44.064 * Looking for test storage... 00:02:44.064 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup 00:02:44.064 21:07:58 -- setup/test-setup.sh@10 -- # uname -s 00:02:44.064 21:07:58 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:02:44.064 21:07:58 -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/acl.sh 00:02:44.064 21:07:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:44.064 21:07:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:44.064 21:07:58 -- common/autotest_common.sh@10 -- # set +x 00:02:44.064 ************************************ 00:02:44.064 START TEST acl 00:02:44.064 ************************************ 00:02:44.064 21:07:58 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/acl.sh 00:02:44.064 * Looking for test storage... 
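
The pre-cleanup traced above probes each /dev/nvme*n1 for a partition table and zeroes the first MiB only when none is found. A minimal sketch of that check-and-wipe loop, assuming plain coreutils and blkid (the log itself calls scripts/spdk-gpt.py first, which reports "No valid GPT data, bailing" before blkid is consulted):

    for dev in /dev/nvme*n1; do
      # blkid prints the partition-table type (gpt/dos); an empty result and a
      # nonzero exit mean no table was found on the device.
      pt=$(blkid -s PTTYPE -o value "$dev" || true)
      if [[ -z "$pt" ]]; then
        # No partition table: zero the first MiB so stale metadata from a
        # previous run cannot leak into later tests.
        dd if=/dev/zero of="$dev" bs=1M count=1
      fi
    done
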
00:02:44.064 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup 00:02:44.064 21:07:59 -- setup/acl.sh@10 -- # get_zoned_devs 00:02:44.064 21:07:59 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:02:44.064 21:07:59 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:02:44.064 21:07:59 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:02:44.064 21:07:59 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:02:44.064 21:07:59 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:02:44.064 21:07:59 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:02:44.064 21:07:59 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:44.064 21:07:59 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:02:44.064 21:07:59 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:02:44.064 21:07:59 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:02:44.064 21:07:59 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:02:44.064 21:07:59 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:02:44.064 21:07:59 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:02:44.064 21:07:59 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:02:44.064 21:07:59 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:02:44.064 21:07:59 -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:02:44.064 21:07:59 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:02:44.064 21:07:59 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:02:44.064 21:07:59 -- setup/acl.sh@12 -- # devs=() 00:02:44.064 21:07:59 -- setup/acl.sh@12 -- # declare -a devs 00:02:44.064 21:07:59 -- setup/acl.sh@13 -- # drivers=() 00:02:44.064 21:07:59 -- setup/acl.sh@13 -- # declare -A drivers 00:02:44.064 21:07:59 -- setup/acl.sh@51 -- # setup reset 00:02:44.064 21:07:59 -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:44.064 21:07:59 -- setup/common.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh reset 00:02:47.361 21:08:02 -- setup/acl.sh@52 -- # collect_setup_devs 00:02:47.361 21:08:02 -- setup/acl.sh@16 -- # local dev driver 00:02:47.361 21:08:02 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:47.361 21:08:02 -- setup/acl.sh@15 -- # setup output status 00:02:47.361 21:08:02 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:47.361 21:08:02 -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh status 00:02:49.910 Hugepages 00:02:49.910 node hugesize free / total 00:02:49.910 21:08:04 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:49.910 21:08:04 -- setup/acl.sh@19 -- # continue 00:02:49.910 21:08:04 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:49.910 21:08:04 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:49.910 21:08:04 -- setup/acl.sh@19 -- # continue 00:02:49.910 21:08:04 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:49.910 21:08:04 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:49.910 21:08:04 -- setup/acl.sh@19 -- # continue 00:02:49.910 21:08:04 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:49.910 00:02:49.910 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:49.910 21:08:04 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:49.910 21:08:04 -- setup/acl.sh@19 -- # continue 00:02:49.910 21:08:04 -- 
setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:49.910 21:08:04 -- setup/acl.sh@19 -- # [[ 0000:6a:01.0 == *:*:*.* ]] 00:02:49.910 21:08:04 -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:02:49.910 21:08:04 -- setup/acl.sh@20 -- # continue 00:02:49.910 21:08:04 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:49.910 21:08:04 -- setup/acl.sh@19 -- # [[ 0000:6a:02.0 == *:*:*.* ]] 00:02:49.910 21:08:04 -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:02:49.910 21:08:04 -- setup/acl.sh@20 -- # continue 00:02:49.910 21:08:04 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:49.910 21:08:04 -- setup/acl.sh@19 -- # [[ 0000:6f:01.0 == *:*:*.* ]] 00:02:49.910 21:08:04 -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:02:49.910 21:08:04 -- setup/acl.sh@20 -- # continue 00:02:49.910 21:08:04 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:49.910 21:08:04 -- setup/acl.sh@19 -- # [[ 0000:6f:02.0 == *:*:*.* ]] 00:02:49.910 21:08:04 -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:02:49.910 21:08:04 -- setup/acl.sh@20 -- # continue 00:02:49.910 21:08:04 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:49.910 21:08:04 -- setup/acl.sh@19 -- # [[ 0000:74:01.0 == *:*:*.* ]] 00:02:49.910 21:08:04 -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:02:49.910 21:08:04 -- setup/acl.sh@20 -- # continue 00:02:49.910 21:08:04 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:50.170 21:08:04 -- setup/acl.sh@19 -- # [[ 0000:74:02.0 == *:*:*.* ]] 00:02:50.170 21:08:04 -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:02:50.170 21:08:04 -- setup/acl.sh@20 -- # continue 00:02:50.170 21:08:04 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:50.170 21:08:04 -- setup/acl.sh@19 -- # [[ 0000:79:01.0 == *:*:*.* ]] 00:02:50.170 21:08:04 -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:02:50.170 21:08:04 -- setup/acl.sh@20 -- # continue 00:02:50.170 21:08:04 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:50.170 21:08:04 -- setup/acl.sh@19 -- # [[ 0000:79:02.0 == *:*:*.* ]] 00:02:50.170 21:08:04 -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:02:50.170 21:08:04 -- setup/acl.sh@20 -- # continue 00:02:50.171 21:08:04 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:50.171 21:08:04 -- setup/acl.sh@19 -- # [[ 0000:c9:00.0 == *:*:*.* ]] 00:02:50.171 21:08:04 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:50.171 21:08:04 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\c\9\:\0\0\.\0* ]] 00:02:50.171 21:08:04 -- setup/acl.sh@22 -- # devs+=("$dev") 00:02:50.171 21:08:04 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:02:50.171 21:08:04 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:50.171 21:08:05 -- setup/acl.sh@19 -- # [[ 0000:ca:00.0 == *:*:*.* ]] 00:02:50.171 21:08:05 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:50.171 21:08:05 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\c\a\:\0\0\.\0* ]] 00:02:50.171 21:08:05 -- setup/acl.sh@22 -- # devs+=("$dev") 00:02:50.171 21:08:05 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:02:50.171 21:08:05 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:50.171 21:08:05 -- setup/acl.sh@19 -- # [[ 0000:cb:00.0 == *:*:*.* ]] 00:02:50.171 21:08:05 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:50.171 21:08:05 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\c\b\:\0\0\.\0* ]] 00:02:50.171 21:08:05 -- setup/acl.sh@22 -- # devs+=("$dev") 00:02:50.171 21:08:05 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:02:50.171 21:08:05 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:50.171 21:08:05 -- setup/acl.sh@19 -- # [[ 
0000:e7:01.0 == *:*:*.* ]] 00:02:50.171 21:08:05 -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:02:50.171 21:08:05 -- setup/acl.sh@20 -- # continue 00:02:50.171 21:08:05 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:50.431 21:08:05 -- setup/acl.sh@19 -- # [[ 0000:e7:02.0 == *:*:*.* ]] 00:02:50.431 21:08:05 -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:02:50.431 21:08:05 -- setup/acl.sh@20 -- # continue 00:02:50.431 21:08:05 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:50.431 21:08:05 -- setup/acl.sh@19 -- # [[ 0000:ec:01.0 == *:*:*.* ]] 00:02:50.431 21:08:05 -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:02:50.431 21:08:05 -- setup/acl.sh@20 -- # continue 00:02:50.431 21:08:05 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:50.431 21:08:05 -- setup/acl.sh@19 -- # [[ 0000:ec:02.0 == *:*:*.* ]] 00:02:50.431 21:08:05 -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:02:50.431 21:08:05 -- setup/acl.sh@20 -- # continue 00:02:50.431 21:08:05 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:50.431 21:08:05 -- setup/acl.sh@19 -- # [[ 0000:f1:01.0 == *:*:*.* ]] 00:02:50.431 21:08:05 -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:02:50.431 21:08:05 -- setup/acl.sh@20 -- # continue 00:02:50.431 21:08:05 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:50.431 21:08:05 -- setup/acl.sh@19 -- # [[ 0000:f1:02.0 == *:*:*.* ]] 00:02:50.431 21:08:05 -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:02:50.431 21:08:05 -- setup/acl.sh@20 -- # continue 00:02:50.431 21:08:05 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:50.431 21:08:05 -- setup/acl.sh@19 -- # [[ 0000:f6:01.0 == *:*:*.* ]] 00:02:50.431 21:08:05 -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:02:50.431 21:08:05 -- setup/acl.sh@20 -- # continue 00:02:50.431 21:08:05 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:50.431 21:08:05 -- setup/acl.sh@19 -- # [[ 0000:f6:02.0 == *:*:*.* ]] 00:02:50.431 21:08:05 -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:02:50.431 21:08:05 -- setup/acl.sh@20 -- # continue 00:02:50.431 21:08:05 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:50.431 21:08:05 -- setup/acl.sh@24 -- # (( 3 > 0 )) 00:02:50.431 21:08:05 -- setup/acl.sh@54 -- # run_test denied denied 00:02:50.431 21:08:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:50.431 21:08:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:50.431 21:08:05 -- common/autotest_common.sh@10 -- # set +x 00:02:50.431 ************************************ 00:02:50.431 START TEST denied 00:02:50.431 ************************************ 00:02:50.431 21:08:05 -- common/autotest_common.sh@1111 -- # denied 00:02:50.431 21:08:05 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:c9:00.0' 00:02:50.431 21:08:05 -- setup/acl.sh@38 -- # setup output config 00:02:50.431 21:08:05 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:50.431 21:08:05 -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh config 00:02:50.431 21:08:05 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:c9:00.0' 00:02:57.018 0000:c9:00.0 (8086 0a54): Skipping denied controller at 0000:c9:00.0 00:02:57.018 21:08:10 -- setup/acl.sh@40 -- # verify 0000:c9:00.0 00:02:57.018 21:08:10 -- setup/acl.sh@28 -- # local dev driver 00:02:57.018 21:08:10 -- setup/acl.sh@30 -- # for dev in "$@" 00:02:57.018 21:08:10 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:c9:00.0 ]] 00:02:57.018 21:08:10 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:c9:00.0/driver 
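
The "denied" verify step traced here reduces to one sysfs lookup: resolve the device's driver symlink and compare its basename against the expected driver. A standalone sketch under that assumption (check_driver is a hypothetical helper; the BDF is the example from this log):

    check_driver() {
      local bdf=$1 expected=$2
      # A bound PCI device exposes a "driver" symlink under sysfs.
      [[ -e /sys/bus/pci/devices/$bdf/driver ]] || return 1
      local driver
      driver=$(readlink -f "/sys/bus/pci/devices/$bdf/driver")
      # Compare only the basename, e.g. "nvme" or "vfio-pci".
      [[ ${driver##*/} == "$expected" ]]
    }
    check_driver 0000:c9:00.0 nvme
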
00:02:57.018 21:08:10 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:02:57.018 21:08:10 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:02:57.018 21:08:10 -- setup/acl.sh@41 -- # setup reset 00:02:57.018 21:08:10 -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:57.018 21:08:10 -- setup/common.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh reset 00:03:01.224 00:03:01.224 real 0m10.726s 00:03:01.224 user 0m2.213s 00:03:01.224 sys 0m4.263s 00:03:01.224 21:08:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:01.224 21:08:15 -- common/autotest_common.sh@10 -- # set +x 00:03:01.224 ************************************ 00:03:01.224 END TEST denied 00:03:01.224 ************************************ 00:03:01.224 21:08:16 -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:01.224 21:08:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:01.224 21:08:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:01.224 21:08:16 -- common/autotest_common.sh@10 -- # set +x 00:03:01.224 ************************************ 00:03:01.224 START TEST allowed 00:03:01.224 ************************************ 00:03:01.224 21:08:16 -- common/autotest_common.sh@1111 -- # allowed 00:03:01.224 21:08:16 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:c9:00.0 00:03:01.224 21:08:16 -- setup/acl.sh@45 -- # setup output config 00:03:01.224 21:08:16 -- setup/acl.sh@46 -- # grep -E '0000:c9:00.0 .*: nvme -> .*' 00:03:01.224 21:08:16 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:01.224 21:08:16 -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh config 00:03:06.660 0000:c9:00.0 (8086 0a54): nvme -> vfio-pci 00:03:06.660 21:08:21 -- setup/acl.sh@47 -- # verify 0000:ca:00.0 0000:cb:00.0 00:03:06.660 21:08:21 -- setup/acl.sh@28 -- # local dev driver 00:03:06.660 21:08:21 -- setup/acl.sh@30 -- # for dev in "$@" 00:03:06.660 21:08:21 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:ca:00.0 ]] 00:03:06.660 21:08:21 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:ca:00.0/driver 00:03:06.660 21:08:21 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:06.660 21:08:21 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:06.660 21:08:21 -- setup/acl.sh@30 -- # for dev in "$@" 00:03:06.660 21:08:21 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:cb:00.0 ]] 00:03:06.660 21:08:21 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:cb:00.0/driver 00:03:06.660 21:08:21 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:06.660 21:08:21 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:06.660 21:08:21 -- setup/acl.sh@48 -- # setup reset 00:03:06.660 21:08:21 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:06.660 21:08:21 -- setup/common.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh reset 00:03:09.954 00:03:09.954 real 0m8.634s 00:03:09.954 user 0m2.130s 00:03:09.954 sys 0m4.135s 00:03:09.954 21:08:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:09.954 21:08:24 -- common/autotest_common.sh@10 -- # set +x 00:03:09.954 ************************************ 00:03:09.954 END TEST allowed 00:03:09.954 ************************************ 00:03:09.954 00:03:09.954 real 0m25.845s 00:03:09.954 user 0m6.503s 00:03:09.954 sys 0m12.482s 00:03:09.954 21:08:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:09.954 21:08:24 -- common/autotest_common.sh@10 -- # set +x 00:03:09.954 ************************************ 
00:03:09.954 END TEST acl 00:03:09.954 ************************************ 00:03:09.954 21:08:24 -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/hugepages.sh 00:03:09.954 21:08:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:09.954 21:08:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:09.954 21:08:24 -- common/autotest_common.sh@10 -- # set +x 00:03:10.216 ************************************ 00:03:10.216 START TEST hugepages 00:03:10.216 ************************************ 00:03:10.216 21:08:24 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/hugepages.sh 00:03:10.216 * Looking for test storage... 00:03:10.216 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup 00:03:10.216 21:08:25 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:10.216 21:08:25 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:10.216 21:08:25 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:10.216 21:08:25 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:10.216 21:08:25 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:10.216 21:08:25 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:10.216 21:08:25 -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:10.216 21:08:25 -- setup/common.sh@18 -- # local node= 00:03:10.216 21:08:25 -- setup/common.sh@19 -- # local var val 00:03:10.216 21:08:25 -- setup/common.sh@20 -- # local mem_f mem 00:03:10.216 21:08:25 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:10.216 21:08:25 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:10.216 21:08:25 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:10.216 21:08:25 -- setup/common.sh@28 -- # mapfile -t mem 00:03:10.216 21:08:25 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:10.216 21:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.216 21:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.217 21:08:25 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 263718180 kB' 'MemFree: 246539344 kB' 'MemAvailable: 246323824 kB' 'Buffers: 1308 kB' 'Cached: 8418516 kB' 'SwapCached: 0 kB' 'Active: 8631264 kB' 'Inactive: 439520 kB' 'Active(anon): 8059600 kB' 'Inactive(anon): 0 kB' 'Active(file): 571664 kB' 'Inactive(file): 439520 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 660840 kB' 'Mapped: 205160 kB' 'Shmem: 7408640 kB' 'KReclaimable: 535248 kB' 'Slab: 1233072 kB' 'SReclaimable: 535248 kB' 'SUnreclaim: 697824 kB' 'KernelStack: 25696 kB' 'PageTables: 9948 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 138150540 kB' 'Committed_AS: 9689872 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 331132 kB' 'VmallocChunk: 0 kB' 'Percpu: 163840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 3289152 kB' 'DirectMap2M: 22702080 kB' 'DirectMap1G: 244318208 kB' 00:03:10.217 21:08:25 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:10.217 21:08:25 -- setup/common.sh@32 -- # continue 00:03:10.217 21:08:25 
-- setup/common.sh@31 -- # IFS=': ' 00:03:10.217 21:08:25 -- setup/common.sh@31 -- # read -r var val _ [the same xtrace sequence, "[[ <field> == \H\u\g\e\p\a\g\e\s\i\z\e ]]", "continue", "IFS=': '", "read -r var val _", repeats for every remaining /proc/meminfo field through HugePages_Surp] 00:03:10.218 21:08:25 -- setup/common.sh@31 -- # IFS=': '
00:03:10.218 21:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.218 21:08:25 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:10.218 21:08:25 -- setup/common.sh@33 -- # echo 2048 00:03:10.218 21:08:25 -- setup/common.sh@33 -- # return 0 00:03:10.218 21:08:25 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:10.218 21:08:25 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:10.218 21:08:25 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:10.218 21:08:25 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:10.218 21:08:25 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:10.218 21:08:25 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:10.218 21:08:25 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:10.218 21:08:25 -- setup/hugepages.sh@207 -- # get_nodes 00:03:10.218 21:08:25 -- setup/hugepages.sh@27 -- # local node 00:03:10.218 21:08:25 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:10.218 21:08:25 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:10.218 21:08:25 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:10.218 21:08:25 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:10.218 21:08:25 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:10.218 21:08:25 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:10.218 21:08:25 -- setup/hugepages.sh@208 -- # clear_hp 00:03:10.218 21:08:25 -- setup/hugepages.sh@37 -- # local node hp 00:03:10.218 21:08:25 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:10.218 21:08:25 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:10.218 21:08:25 -- setup/hugepages.sh@41 -- # echo 0 00:03:10.218 21:08:25 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:10.218 21:08:25 -- setup/hugepages.sh@41 -- # echo 0 00:03:10.218 21:08:25 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:10.218 21:08:25 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:10.218 21:08:25 -- setup/hugepages.sh@41 -- # echo 0 00:03:10.218 21:08:25 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:10.218 21:08:25 -- setup/hugepages.sh@41 -- # echo 0 00:03:10.218 21:08:25 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:10.218 21:08:25 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:10.218 21:08:25 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:10.218 21:08:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:10.218 21:08:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:10.218 21:08:25 -- common/autotest_common.sh@10 -- # set +x 00:03:10.218 ************************************ 00:03:10.218 START TEST default_setup 00:03:10.218 ************************************ 00:03:10.218 21:08:25 -- common/autotest_common.sh@1111 -- # default_setup 00:03:10.218 21:08:25 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:10.218 21:08:25 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:10.218 21:08:25 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:10.218 21:08:25 -- setup/hugepages.sh@51 -- # shift 00:03:10.218 21:08:25 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:10.218 21:08:25 -- setup/hugepages.sh@52 -- 
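What the condensed xtrace above is doing: get_meminfo splits each /proc/meminfo line on ': ', skips every key that is not the requested one (the repeated @32 'continue' entries), and echoes the value of the first match, here Hugepagesize -> 2048 kB. A minimal self-contained sketch of that loop, assuming it mirrors setup/common.sh; the function name get_meminfo_field is illustrative, not the script's own:

    get_meminfo_field() {
        local want=$1 var val _
        # Split "Key:   value kB" on ':' and whitespace, as IFS=': ' does above.
        while IFS=': ' read -r var val _; do
            [[ $var == "$want" ]] || continue   # the repeated @32 xtrace entries
            echo "$val"                         # the @33 'echo 2048' entry
            return 0
        done < /proc/meminfo
        return 1
    }
    get_meminfo_field Hugepagesize   # prints 2048 on this node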
00:03:10.218 21:08:25 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup
00:03:10.218 21:08:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:03:10.218 21:08:25 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:03:10.218 21:08:25 -- common/autotest_common.sh@10 -- # set +x
00:03:10.218 ************************************
00:03:10.218 START TEST default_setup
00:03:10.218 ************************************
00:03:10.218 21:08:25 -- common/autotest_common.sh@1111 -- # default_setup
00:03:10.218 21:08:25 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0
00:03:10.218 21:08:25 -- setup/hugepages.sh@49 -- # local size=2097152
00:03:10.218 21:08:25 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:03:10.218 21:08:25 -- setup/hugepages.sh@51 -- # shift
00:03:10.218 21:08:25 -- setup/hugepages.sh@52 -- # node_ids=('0')
00:03:10.218 21:08:25 -- setup/hugepages.sh@52 -- # local node_ids
00:03:10.218 21:08:25 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:10.218 21:08:25 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:10.218 21:08:25 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:03:10.218 21:08:25 -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:03:10.218 21:08:25 -- setup/hugepages.sh@62 -- # local user_nodes
00:03:10.218 21:08:25 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:10.218 21:08:25 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:10.218 21:08:25 -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:10.218 21:08:25 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:10.218 21:08:25 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:03:10.218 21:08:25 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:10.218 21:08:25 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:03:10.218 21:08:25 -- setup/hugepages.sh@73 -- # return 0
00:03:10.218 21:08:25 -- setup/hugepages.sh@137 -- # setup output
00:03:10.218 21:08:25 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:10.218 21:08:25 -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh
00:03:13.514 0000:74:02.0 (8086 0cfe): idxd -> vfio-pci
00:03:13.514 0000:f1:02.0 (8086 0cfe): idxd -> vfio-pci
00:03:13.514 0000:79:02.0 (8086 0cfe): idxd -> vfio-pci
00:03:13.514 0000:6f:01.0 (8086 0b25): idxd -> vfio-pci
00:03:13.514 0000:6f:02.0 (8086 0cfe): idxd -> vfio-pci
00:03:13.514 0000:f6:01.0 (8086 0b25): idxd -> vfio-pci
00:03:13.514 0000:f6:02.0 (8086 0cfe): idxd -> vfio-pci
00:03:13.514 0000:74:01.0 (8086 0b25): idxd -> vfio-pci
00:03:13.514 0000:6a:02.0 (8086 0cfe): idxd -> vfio-pci
00:03:13.514 0000:79:01.0 (8086 0b25): idxd -> vfio-pci
00:03:13.514 0000:ec:01.0 (8086 0b25): idxd -> vfio-pci
00:03:13.514 0000:6a:01.0 (8086 0b25): idxd -> vfio-pci
00:03:13.514 0000:ec:02.0 (8086 0cfe): idxd -> vfio-pci
00:03:13.514 0000:e7:01.0 (8086 0b25): idxd -> vfio-pci
00:03:13.514 0000:e7:02.0 (8086 0cfe): idxd -> vfio-pci
00:03:13.514 0000:f1:01.0 (8086 0b25): idxd -> vfio-pci
00:03:15.421 0000:c9:00.0 (8086 0a54): nvme -> vfio-pci
00:03:15.421 0000:cb:00.0 (8086 0a54): nvme -> vfio-pci
00:03:15.421 0000:ca:00.0 (8086 0a54): nvme -> vfio-pci
00:03:16.232 21:08:30 -- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:03:16.232 21:08:30 -- setup/hugepages.sh@89 -- # local node
00:03:16.232 21:08:30 -- setup/hugepages.sh@90 -- # local sorted_t
00:03:16.232 21:08:30 -- setup/hugepages.sh@91 -- # local sorted_s
00:03:16.232 21:08:30 -- setup/hugepages.sh@92 -- # local surp
00:03:16.232 21:08:30 -- setup/hugepages.sh@93 -- # local resv
00:03:16.232 21:08:30 -- setup/hugepages.sh@94 -- # local anon
00:03:16.232 21:08:30 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
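The sizing traced above is plain integer division: default_setup asks get_test_nr_hugepages for a 2097152 kB pool of the default 2048 kB pages, i.e. 2097152 / 2048 = 1024 pages, all assigned to node 0 (node_ids=('0')), after clear_hp has already zeroed every per-node count. A sketch of the equivalent sysfs operations, using the standard kernel hugepage knobs; this must run as root and is an illustration, not a copy of the script:

    size_kb=2097152                         # requested pool size in kB
    page_kb=2048                            # Hugepagesize from /proc/meminfo
    nr_hugepages=$(( size_kb / page_kb ))   # 2097152 / 2048 = 1024 pages

    # clear_hp equivalent: zero every hugepage count on every NUMA node
    for hp in /sys/devices/system/node/node*/hugepages/hugepages-*/nr_hugepages; do
        echo 0 > "$hp"
    done
    # default_setup pins the whole pool to node 0
    echo "$nr_hugepages" > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages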
00:03:16.232 21:08:30 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:16.232 21:08:30 -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:16.232 21:08:30 -- setup/common.sh@18 -- # local node=
00:03:16.232 21:08:30 -- setup/common.sh@19 -- # local var val
00:03:16.232 21:08:30 -- setup/common.sh@20 -- # local mem_f mem
00:03:16.232 21:08:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:16.232 21:08:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:16.232 21:08:30 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:16.232 21:08:30 -- setup/common.sh@28 -- # mapfile -t mem
00:03:16.232 21:08:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:16.232 21:08:30 -- setup/common.sh@31 -- # IFS=': '
00:03:16.232 21:08:30 -- setup/common.sh@31 -- # read -r var val _
00:03:16.232 21:08:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 263718180 kB' 'MemFree: 248916168 kB' 'MemAvailable: 248699644 kB' 'Buffers: 1308 kB' 'Cached: 8418780 kB' 'SwapCached: 0 kB' 'Active: 8658996 kB' 'Inactive: 439520 kB' 'Active(anon): 8087332 kB' 'Inactive(anon): 0 kB' 'Active(file): 571664 kB' 'Inactive(file): 439520 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 687720 kB' 'Mapped: 205132 kB' 'Shmem: 7408904 kB' 'KReclaimable: 533240 kB' 'Slab: 1221596 kB' 'SReclaimable: 533240 kB' 'SUnreclaim: 688356 kB' 'KernelStack: 25856 kB' 'PageTables: 11736 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 139199116 kB' 'Committed_AS: 9776984 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 331180 kB' 'VmallocChunk: 0 kB' 'Percpu: 163840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3289152 kB' 'DirectMap2M: 22702080 kB' 'DirectMap1G: 244318208 kB'
[... xtrace condensed: the read/continue scan repeats over every field of the snapshot above until AnonHugePages is reached ...]
00:03:16.234 21:08:30 -- setup/common.sh@31 -- # IFS=': '
00:03:16.234 21:08:30 -- setup/common.sh@31 -- # read -r var val _
00:03:16.234 21:08:30 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:16.234 21:08:30 -- setup/common.sh@33 -- # echo 0
00:03:16.234 21:08:30 -- setup/common.sh@33 -- # return 0
00:03:16.234 21:08:30 -- setup/hugepages.sh@97 -- # anon=0
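get_meminfo above ran with an empty node argument, so mem_f stayed /proc/meminfo and the @23 existence test probed the literal (nonexistent) path node/meminfo. Passed a node number, the same helper would read /sys/devices/system/node/nodeN/meminfo, whose lines carry a "Node N " prefix; stripping it is what the @29 mem=("${mem[@]#Node +([0-9]) }") entry is for. A hedged sketch of both branches (the function name is illustrative):

    get_node_meminfo() {
        local want=$1 node=${2:-}
        if [[ -z $node ]]; then
            # system-wide: lines look like "HugePages_Total:    1024"
            awk -v k="$want:" '$1 == k {print $2; exit}' /proc/meminfo
        else
            # per-node: lines look like "Node 0 HugePages_Total:  1024"
            awk -v k="$want:" '$3 == k {print $4; exit}' \
                "/sys/devices/system/node/node$node/meminfo"
        fi
    }
    get_node_meminfo AnonHugePages      # 0 in the snapshot above
    get_node_meminfo HugePages_Total 0  # node-0 share of the pool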
00:03:16.234 21:08:30 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:16.234 21:08:30 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:16.234 21:08:30 -- setup/common.sh@18 -- # local node=
00:03:16.234 21:08:30 -- setup/common.sh@19 -- # local var val
00:03:16.234 21:08:30 -- setup/common.sh@20 -- # local mem_f mem
00:03:16.234 21:08:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:16.234 21:08:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:16.234 21:08:30 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:16.234 21:08:30 -- setup/common.sh@28 -- # mapfile -t mem
00:03:16.234 21:08:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:16.234 21:08:30 -- setup/common.sh@31 -- # IFS=': '
00:03:16.234 21:08:30 -- setup/common.sh@31 -- # read -r var val _
00:03:16.234 21:08:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 263718180 kB' 'MemFree: 248921380 kB' 'MemAvailable: 248704856 kB' 'Buffers: 1308 kB' 'Cached: 8418780 kB' 'SwapCached: 0 kB' 'Active: 8659540 kB' 'Inactive: 439520 kB' 'Active(anon): 8087876 kB' 'Inactive(anon): 0 kB' 'Active(file): 571664 kB' 'Inactive(file): 439520 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 688232 kB' 'Mapped: 205132 kB' 'Shmem: 7408904 kB' 'KReclaimable: 533240 kB' 'Slab: 1221548 kB' 'SReclaimable: 533240 kB' 'SUnreclaim: 688308 kB' 'KernelStack: 25696 kB' 'PageTables: 11204 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 139199116 kB' 'Committed_AS: 9774316 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 331068 kB' 'VmallocChunk: 0 kB' 'Percpu: 163840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3289152 kB' 'DirectMap2M: 22702080 kB' 'DirectMap1G: 244318208 kB'
[... xtrace condensed: read/continue repeats over every field until HugePages_Surp is reached ...]
00:03:16.235 21:08:30 -- setup/common.sh@31 -- # IFS=': '
00:03:16.235 21:08:30 -- setup/common.sh@31 -- # read -r var val _
00:03:16.235 21:08:30 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:16.235 21:08:30 -- setup/common.sh@33 -- # echo 0
00:03:16.235 21:08:30 -- setup/common.sh@33 -- # return 0
00:03:16.235 21:08:30 -- setup/hugepages.sh@99 -- # surp=0
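Each counter above costs a full scan of the snapshot; the condensed continue runs are that linear search replayed once per field. Purely as an editorial illustration (not how setup/common.sh is written), a single awk pass can lift all four HugePages_* counters at once:

    # Reads /proc/meminfo once and sets HugePages_Total, HugePages_Free,
    # HugePages_Rsvd and HugePages_Surp as shell variables; eval is tolerable
    # here only because the keys come from the kernel, not from user input.
    eval "$(awk -F': *' '/^HugePages_/ {print $1 "=" $2}' /proc/meminfo)"
    echo "total=$HugePages_Total free=$HugePages_Free rsvd=$HugePages_Rsvd surp=$HugePages_Surp"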
00:03:16.235 21:08:30 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:16.235 21:08:30 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:16.235 21:08:30 -- setup/common.sh@18 -- # local node=
00:03:16.235 21:08:30 -- setup/common.sh@19 -- # local var val
00:03:16.235 21:08:30 -- setup/common.sh@20 -- # local mem_f mem
00:03:16.235 21:08:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:16.235 21:08:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:16.235 21:08:30 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:16.235 21:08:30 -- setup/common.sh@28 -- # mapfile -t mem
00:03:16.235 21:08:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:16.235 21:08:30 -- setup/common.sh@31 -- # IFS=': '
00:03:16.235 21:08:30 -- setup/common.sh@31 -- # read -r var val _
00:03:16.235 21:08:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 263718180 kB' 'MemFree: 248922544 kB' 'MemAvailable: 248706020 kB' 'Buffers: 1308 kB' 'Cached: 8418780 kB' 'SwapCached: 0 kB' 'Active: 8658092 kB' 'Inactive: 439520 kB' 'Active(anon): 8086428 kB' 'Inactive(anon): 0 kB' 'Active(file): 571664 kB' 'Inactive(file): 439520 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 686844 kB' 'Mapped: 205120 kB' 'Shmem: 7408904 kB' 'KReclaimable: 533240 kB' 'Slab: 1221316 kB' 'SReclaimable: 533240 kB' 'SUnreclaim: 688076 kB' 'KernelStack: 25584 kB' 'PageTables: 11156 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 139199116 kB' 'Committed_AS: 9774328 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 331004 kB' 'VmallocChunk: 0 kB' 'Percpu: 163840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3289152 kB' 'DirectMap2M: 22702080 kB' 'DirectMap1G: 244318208 kB'
[... xtrace condensed: read/continue repeats over every field until HugePages_Rsvd is reached ...]
00:03:16.236 21:08:30 -- setup/common.sh@31 -- # IFS=': '
00:03:16.236 21:08:30 -- setup/common.sh@31 -- # read -r var val _
00:03:16.236 21:08:30 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:16.236 21:08:30 -- setup/common.sh@33 -- # echo 0
00:03:16.236 21:08:30 -- setup/common.sh@33 -- # return 0
00:03:16.236 21:08:30 -- setup/hugepages.sh@100 -- # resv=0 00:03:16.236 21:08:30 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:16.236 nr_hugepages=1024 00:03:16.236 21:08:30 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:16.236 resv_hugepages=0 00:03:16.237 21:08:30 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:16.237 surplus_hugepages=0 00:03:16.237 21:08:30 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:16.237 anon_hugepages=0 00:03:16.237 21:08:30 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:16.237 21:08:30 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:16.237 21:08:30 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:16.237 21:08:30 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:16.237 21:08:30 -- setup/common.sh@18 -- # local node= 00:03:16.237 21:08:30 -- setup/common.sh@19 -- # local var val 00:03:16.237 21:08:30 -- setup/common.sh@20 -- # local mem_f mem 00:03:16.237 21:08:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:16.237 21:08:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:16.237 21:08:30 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:16.237 21:08:30 -- setup/common.sh@28 -- # mapfile -t mem 00:03:16.237 21:08:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:16.237 21:08:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.237 21:08:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.237 21:08:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 263718180 kB' 'MemFree: 248922444 kB' 'MemAvailable: 248705920 kB' 'Buffers: 1308 kB' 'Cached: 8418780 kB' 'SwapCached: 0 kB' 'Active: 8658496 kB' 'Inactive: 439520 kB' 'Active(anon): 8086832 kB' 'Inactive(anon): 0 kB' 'Active(file): 571664 kB' 'Inactive(file): 439520 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 687208 kB' 'Mapped: 205012 kB' 'Shmem: 7408904 kB' 'KReclaimable: 533240 kB' 'Slab: 1221324 kB' 'SReclaimable: 533240 kB' 'SUnreclaim: 688084 kB' 'KernelStack: 25584 kB' 'PageTables: 11152 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 139199116 kB' 'Committed_AS: 9774344 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 331004 kB' 'VmallocChunk: 0 kB' 'Percpu: 163840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3289152 kB' 'DirectMap2M: 22702080 kB' 'DirectMap1G: 244318208 kB' 00:03:16.237 21:08:30 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.237 21:08:30 -- setup/common.sh@32 -- # continue 00:03:16.237 21:08:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.237 21:08:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.237 21:08:30 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.237 21:08:30 -- setup/common.sh@32 -- # continue 00:03:16.237 21:08:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.237 21:08:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.237 21:08:30 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.237 21:08:30 -- setup/common.sh@32 -- # 
continue 00:03:16.237 21:08:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.237 21:08:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.237 21:08:30 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.237 21:08:30 -- setup/common.sh@32 -- # continue 00:03:16.237 21:08:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.237 21:08:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.237 21:08:30 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.237 21:08:30 -- setup/common.sh@32 -- # continue 00:03:16.237 21:08:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.237 21:08:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.237 21:08:30 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.237 21:08:30 -- setup/common.sh@32 -- # continue 00:03:16.237 21:08:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.237 21:08:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.237 21:08:30 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.237 21:08:30 -- setup/common.sh@32 -- # continue 00:03:16.237 21:08:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.237 21:08:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.237 21:08:30 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.237 21:08:30 -- setup/common.sh@32 -- # continue 00:03:16.237 21:08:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.237 21:08:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.237 21:08:30 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.237 21:08:30 -- setup/common.sh@32 -- # continue 00:03:16.237 21:08:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.237 21:08:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.237 21:08:30 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.237 21:08:30 -- setup/common.sh@32 -- # continue 00:03:16.237 21:08:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.237 21:08:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.237 21:08:30 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.237 21:08:30 -- setup/common.sh@32 -- # continue 00:03:16.237 21:08:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.237 21:08:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.237 21:08:30 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.237 21:08:30 -- setup/common.sh@32 -- # continue 00:03:16.237 21:08:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.237 21:08:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.237 21:08:30 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.237 21:08:30 -- setup/common.sh@32 -- # continue 00:03:16.237 21:08:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.237 21:08:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.237 21:08:30 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.237 21:08:30 -- setup/common.sh@32 -- # continue 00:03:16.237 21:08:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.237 21:08:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.237 21:08:30 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.237 21:08:30 -- setup/common.sh@32 -- # continue 00:03:16.237 21:08:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.237 21:08:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.237 21:08:30 -- 
00:03:16.238 21:08:30 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:16.238 21:08:30 -- setup/common.sh@33 -- # echo 1024
00:03:16.238 21:08:30 -- setup/common.sh@33 -- # return 0
00:03:16.238 21:08:30 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:16.238 21:08:30 -- setup/hugepages.sh@112 -- # get_nodes
00:03:16.238 21:08:30 -- setup/hugepages.sh@27 -- # local node
00:03:16.238 21:08:30 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:16.238 21:08:30 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:16.238 21:08:30 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:16.238 21:08:30 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:16.238 21:08:30 -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:16.238 21:08:30 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
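get_nodes discovers the NUMA topology by globbing /sys/devices/system/node/node+([0-9]); the per-node counters read next come from node-local meminfo files, whose lines carry a "Node <N> " prefix. An illustrative stand-alone loop over the same sysfs layout (variable names are ours):

    #!/usr/bin/env bash
    shopt -s extglob
    # Report the surplus-hugepage count for every NUMA node.
    for node in /sys/devices/system/node/node+([0-9]); do
        n=${node##*node}
        # node-local lines look like: "Node 0 HugePages_Surp:     0"
        surp=$(awk '$3 == "HugePages_Surp:" {print $4}' "$node/meminfo")
        echo "node$n HugePages_Surp=$surp"
    done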
00:03:16.238 21:08:30 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:16.238 21:08:30 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:16.238 21:08:30 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:16.238 21:08:30 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:16.238 21:08:30 -- setup/common.sh@18 -- # local node=0
00:03:16.238 21:08:30 -- setup/common.sh@19 -- # local var val
00:03:16.238 21:08:30 -- setup/common.sh@20 -- # local mem_f mem
00:03:16.238 21:08:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:16.238 21:08:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:16.238 21:08:30 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:16.238 21:08:30 -- setup/common.sh@28 -- # mapfile -t mem
00:03:16.238 21:08:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:16.238 21:08:30 -- setup/common.sh@31 -- # IFS=': '
00:03:16.238 21:08:30 -- setup/common.sh@31 -- # read -r var val _
00:03:16.238 21:08:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 131767928 kB' 'MemFree: 124319176 kB' 'MemUsed: 7448752 kB' 'SwapCached: 0 kB' 'Active: 3279436 kB' 'Inactive: 317048 kB' 'Active(anon): 2831028 kB' 'Inactive(anon): 0 kB' 'Active(file): 448408 kB' 'Inactive(file): 317048 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3430920 kB' 'Mapped: 144784 kB' 'AnonPages: 174696 kB' 'Shmem: 2665464 kB' 'KernelStack: 13432 kB' 'PageTables: 5040 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 255240 kB' 'Slab: 630552 kB' 'SReclaimable: 255240 kB' 'SUnreclaim: 375312 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:03:16.238 21:08:30 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:16.238 21:08:30 -- setup/common.sh@32 -- # continue
00:03:16.239 [... the same "setup/common.sh@32 [[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue / IFS=': ' / read -r var val _" xtrace repeats for every node0 field from MemFree through HugePages_Free ...]
00:03:16.239 21:08:30 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:16.239 21:08:30 -- setup/common.sh@33 -- # echo 0
00:03:16.239 21:08:30 -- setup/common.sh@33 -- # return 0
00:03:16.239 21:08:30 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:16.239 21:08:30 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:16.239 21:08:30 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:16.239 21:08:30 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:16.239 21:08:30 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:03:16.239 node0=1024 expecting 1024
00:03:16.239 21:08:30 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:16.239
00:03:16.239 real 0m5.651s
00:03:16.239 user 0m1.199s
00:03:16.239 sys 0m2.139s
00:03:16.239 21:08:30 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:03:16.239 21:08:30 -- common/autotest_common.sh@10 -- # set +x
00:03:16.239 ************************************
00:03:16.239 END TEST default_setup
00:03:16.239 ************************************
00:03:16.239 21:08:30 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:03:16.239 21:08:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:03:16.239 21:08:30 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:03:16.239 21:08:30 -- common/autotest_common.sh@10 -- # set +x
00:03:16.239 ************************************
00:03:16.239 START TEST per_node_1G_alloc
00:03:16.239 ************************************
00:03:16.239 21:08:30 -- common/autotest_common.sh@1111 -- # per_node_1G_alloc
00:03:16.239 21:08:30 -- setup/hugepages.sh@143 -- # local IFS=,
00:03:16.239 21:08:30 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1
00:03:16.239 21:08:30 -- setup/hugepages.sh@49 -- # local size=1048576
00:03:16.239 21:08:30 -- setup/hugepages.sh@50 -- # (( 3 > 1 ))
00:03:16.239 21:08:30 -- setup/hugepages.sh@51 -- # shift
00:03:16.239 21:08:30 -- setup/hugepages.sh@52 -- # node_ids=('0' '1')
00:03:16.239 21:08:30 -- setup/hugepages.sh@52 -- # local node_ids
00:03:16.239 21:08:30 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:16.239 21:08:30 -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:03:16.239 21:08:30 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1
00:03:16.239 21:08:30 -- setup/hugepages.sh@62 -- # user_nodes=('0' '1')
00:03:16.239 21:08:30 -- setup/hugepages.sh@62 -- # local user_nodes
00:03:16.239 21:08:30 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:03:16.239 21:08:30 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:16.239 21:08:30 -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:16.239 21:08:30 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:16.240 21:08:30 -- setup/hugepages.sh@69 -- # (( 2 > 0 ))
00:03:16.240 21:08:30 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:16.240 21:08:30 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:03:16.240 21:08:30 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:16.240 21:08:30 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:03:16.240 21:08:30 -- setup/hugepages.sh@73 -- # return 0
00:03:16.240 21:08:30 -- setup/hugepages.sh@146 -- # NRHUGE=512
00:03:16.240 21:08:30 -- setup/hugepages.sh@146 -- # HUGENODE=0,1
00:03:16.240 21:08:30 -- setup/hugepages.sh@146 -- # setup output
00:03:16.240 21:08:30 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:16.240 21:08:30 -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh
00:03:18.783 0000:c9:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:18.783 0000:74:02.0 (8086 0cfe): Already using the vfio-pci driver
00:03:18.783 0000:f1:02.0 (8086 0cfe): Already using the vfio-pci driver
00:03:18.783 0000:79:02.0 (8086 0cfe): Already using the vfio-pci driver
00:03:18.783 0000:cb:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:18.783 0000:6f:01.0 (8086 0b25): Already using the vfio-pci driver
00:03:18.783 0000:6f:02.0 (8086 0cfe): Already using the vfio-pci driver
00:03:18.783 0000:f6:01.0 (8086 0b25): Already using the vfio-pci driver
00:03:18.783 0000:f6:02.0 (8086 0cfe): Already using the vfio-pci driver
00:03:18.783 0000:74:01.0 (8086 0b25): Already using the vfio-pci driver
00:03:18.783 0000:6a:02.0 (8086 0cfe): Already using the vfio-pci driver
00:03:18.783 0000:79:01.0 (8086 0b25): Already using the vfio-pci driver
00:03:18.783 0000:ec:01.0 (8086 0b25): Already using the vfio-pci driver
00:03:18.783 0000:6a:01.0 (8086 0b25): Already using the vfio-pci driver
00:03:18.783 0000:ec:02.0 (8086 0cfe): Already using the vfio-pci driver
00:03:18.783 0000:ca:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:18.783 0000:e7:01.0 (8086 0b25): Already using the vfio-pci driver
00:03:18.783 0000:e7:02.0 (8086 0cfe): Already using the vfio-pci driver
00:03:18.783 0000:f1:01.0 (8086 0b25): Already using the vfio-pci driver
00:03:19.046 21:08:33 -- setup/hugepages.sh@147 -- # nr_hugepages=1024
00:03:19.046 21:08:33 -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:03:19.046 21:08:33 -- setup/hugepages.sh@89 -- # local node
00:03:19.046 21:08:33 -- setup/hugepages.sh@90 -- # local sorted_t
00:03:19.046 21:08:33 -- setup/hugepages.sh@91 -- # local sorted_s
00:03:19.046 21:08:33 -- setup/hugepages.sh@92 -- # local surp
00:03:19.046 21:08:33 -- setup/hugepages.sh@93 -- # local resv
00:03:19.046 21:08:33 -- setup/hugepages.sh@94 -- # local anon
00:03:19.046 21:08:33 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
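The hugepages.sh@96 test just above compares the kernel's transparent-hugepage mode string against "[never]": anonymous THP is only worth counting when THP has not been disabled outright. A sketch of the same guard, assuming the standard sysfs path (variable names are ours):

    #!/usr/bin/env bash
    # The bracketed word is the active THP mode, e.g. "always [madvise] never".
    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)
    if [[ $thp != *"[never]"* ]]; then
        anon=$(awk '$1 == "AnonHugePages:" {print $2}' /proc/meminfo)
    else
        anon=0
    fi
    echo "anon_hugepages=${anon} kB"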
00:03:19.046 21:08:33 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:19.046 21:08:33 -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:19.046 21:08:33 -- setup/common.sh@18 -- # local node=
00:03:19.046 21:08:33 -- setup/common.sh@19 -- # local var val
00:03:19.046 21:08:33 -- setup/common.sh@20 -- # local mem_f mem
00:03:19.046 21:08:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:19.046 21:08:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:19.046 21:08:33 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:19.046 21:08:33 -- setup/common.sh@28 -- # mapfile -t mem
00:03:19.046 21:08:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:19.046 21:08:33 -- setup/common.sh@31 -- # IFS=': '
00:03:19.046 21:08:33 -- setup/common.sh@31 -- # read -r var val _
00:03:19.046 21:08:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 263718180 kB' 'MemFree: 248938724 kB' 'MemAvailable: 248722200 kB' 'Buffers: 1308 kB' 'Cached: 8418888 kB' 'SwapCached: 0 kB' 'Active: 8645452 kB' 'Inactive: 439520 kB' 'Active(anon): 8073788 kB' 'Inactive(anon): 0 kB' 'Active(file): 571664 kB' 'Inactive(file): 439520 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 673956 kB' 'Mapped: 204000 kB' 'Shmem: 7409012 kB' 'KReclaimable: 533240 kB' 'Slab: 1220572 kB' 'SReclaimable: 533240 kB' 'SUnreclaim: 687332 kB' 'KernelStack: 25216 kB' 'PageTables: 9132 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 139199116 kB' 'Committed_AS: 9693312 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 330812 kB' 'VmallocChunk: 0 kB' 'Percpu: 163840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3289152 kB' 'DirectMap2M: 22702080 kB' 'DirectMap1G: 244318208 kB'
00:03:19.046 21:08:33 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:19.046 21:08:33 -- setup/common.sh@32 -- # continue
00:03:19.046 [... the same "setup/common.sh@32 [[ <field> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] / continue / IFS=': ' / read -r var val _" xtrace repeats for every field from MemFree through HardwareCorrupted ...]
00:03:19.047 21:08:33 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:19.047 21:08:33 -- setup/common.sh@33 -- # echo 0
00:03:19.047 21:08:33 -- setup/common.sh@33 -- # return 0
00:03:19.047 21:08:33 -- setup/hugepages.sh@97 -- # anon=0
00:03:19.047 21:08:33 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
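verify_nr_hugepages is re-deriving one identity from the counters gathered here: the kernel's HugePages_Total must equal the requested nr_hugepages plus surplus plus reserved pages, the same `(( 1024 == nr_hugepages + surp + resv ))` check traced earlier. The arithmetic as a stand-alone sketch (nr_hugepages hard-coded to this run's value):

    #!/usr/bin/env bash
    nr_hugepages=1024   # what the test requested
    total=$(awk '$1 == "HugePages_Total:" {print $2}' /proc/meminfo)
    surp=$(awk '$1 == "HugePages_Surp:" {print $2}' /proc/meminfo)
    resv=$(awk '$1 == "HugePages_Rsvd:" {print $2}' /proc/meminfo)
    if (( total == nr_hugepages + surp + resv )); then
        echo "hugepage accounting OK (total=$total surp=$surp resv=$resv)"
    else
        echo "mismatch: total=$total, expected $((nr_hugepages + surp + resv))"
    fi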
00:03:19.047 21:08:33 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:19.047 21:08:33 -- setup/common.sh@18 -- # local node=
00:03:19.047 21:08:33 -- setup/common.sh@19 -- # local var val
00:03:19.047 21:08:33 -- setup/common.sh@20 -- # local mem_f mem
00:03:19.047 21:08:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:19.047 21:08:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:19.047 21:08:33 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:19.047 21:08:33 -- setup/common.sh@28 -- # mapfile -t mem
00:03:19.047 21:08:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:19.047 21:08:33 -- setup/common.sh@31 -- # IFS=': '
00:03:19.047 21:08:33 -- setup/common.sh@31 -- # read -r var val _
00:03:19.047 21:08:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 263718180 kB' 'MemFree: 248938244 kB' 'MemAvailable: 248721720 kB' 'Buffers: 1308 kB' 'Cached: 8418904 kB' 'SwapCached: 0 kB' 'Active: 8646016 kB' 'Inactive: 439520 kB' 'Active(anon): 8074352 kB' 'Inactive(anon): 0 kB' 'Active(file): 571664 kB' 'Inactive(file): 439520 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 674512 kB' 'Mapped: 204000 kB' 'Shmem: 7409028 kB' 'KReclaimable: 533240 kB' 'Slab: 1220556 kB' 'SReclaimable: 533240 kB' 'SUnreclaim: 687316 kB' 'KernelStack: 25216 kB' 'PageTables: 9244 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 139199116 kB' 'Committed_AS: 9693696 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 330780 kB' 'VmallocChunk: 0 kB' 'Percpu: 163840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3289152 kB' 'DirectMap2M: 22702080 kB' 'DirectMap1G: 244318208 kB'
00:03:19.047 21:08:33 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:19.047 21:08:33 -- setup/common.sh@32 -- # continue
00:03:19.048 [... the same "setup/common.sh@32 [[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue / IFS=': ' / read -r var val _" xtrace repeats for every field from MemFree through HugePages_Rsvd ...]
00:03:19.048 21:08:33 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:19.048 21:08:33 -- setup/common.sh@33 -- # echo 0
00:03:19.048 21:08:33 -- setup/common.sh@33 -- # return 0
00:03:19.048 21:08:33 -- setup/hugepages.sh@99 -- # surp=0
00:03:19.048 21:08:33 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
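For this per-node test, hugepages.sh@146 exported NRHUGE=512 HUGENODE=0,1 before re-running setup, i.e. 512 pages on each of nodes 0 and 1, which is the 1024-page total being re-verified here. The kernel interface such a per-node request ultimately lands on is the node-local sysfs knob; a root-only illustration of that mechanism, not SPDK's setup.sh itself:

    #!/usr/bin/env bash
    # Request 512 x 2 MiB hugepages on nodes 0 and 1, then read the result back.
    NRHUGE=512
    for n in 0 1; do
        echo "$NRHUGE" > "/sys/devices/system/node/node$n/hugepages/hugepages-2048kB/nr_hugepages"
    done
    grep -H . /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages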
'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3289152 kB' 'DirectMap2M: 22702080 kB' 'DirectMap1G: 244318208 kB' 00:03:19.049 21:08:33 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.049 21:08:33 -- setup/common.sh@32 -- # continue 00:03:19.049 21:08:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.049 21:08:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.049 21:08:33 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.049 21:08:33 -- setup/common.sh@32 -- # continue 00:03:19.049 21:08:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.049 21:08:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.049 21:08:33 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.049 21:08:33 -- setup/common.sh@32 -- # continue 00:03:19.049 21:08:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.049 21:08:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.049 21:08:33 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.049 21:08:33 -- setup/common.sh@32 -- # continue 00:03:19.049 21:08:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.049 21:08:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.049 21:08:33 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.049 21:08:33 -- setup/common.sh@32 -- # continue 00:03:19.049 21:08:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.049 21:08:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.049 21:08:33 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.049 21:08:33 -- setup/common.sh@32 -- # continue 00:03:19.049 21:08:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.049 21:08:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.049 21:08:33 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.049 21:08:33 -- setup/common.sh@32 -- # continue 00:03:19.049 21:08:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.049 21:08:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.049 21:08:33 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.049 21:08:33 -- setup/common.sh@32 -- # continue 00:03:19.049 21:08:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.049 21:08:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.049 21:08:33 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.049 21:08:33 -- setup/common.sh@32 -- # continue 00:03:19.049 21:08:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.049 21:08:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.049 21:08:33 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.049 21:08:33 -- setup/common.sh@32 -- # continue 00:03:19.049 21:08:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.049 21:08:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.049 21:08:33 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.049 21:08:33 -- setup/common.sh@32 -- # continue 00:03:19.049 21:08:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.049 21:08:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.049 21:08:33 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.049 21:08:33 -- setup/common.sh@32 -- # continue 00:03:19.049 21:08:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.049 21:08:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.049 21:08:33 -- 
setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.049 21:08:33 -- setup/common.sh@32 -- # continue 00:03:19.049 21:08:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.049 21:08:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.049 21:08:33 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.049 21:08:33 -- setup/common.sh@32 -- # continue 00:03:19.049 21:08:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.049 21:08:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.049 21:08:33 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.049 21:08:33 -- setup/common.sh@32 -- # continue 00:03:19.049 21:08:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.049 21:08:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.049 21:08:33 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.049 21:08:33 -- setup/common.sh@32 -- # continue 00:03:19.049 21:08:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.049 21:08:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.049 21:08:33 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.049 21:08:33 -- setup/common.sh@32 -- # continue 00:03:19.049 21:08:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.049 21:08:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.049 21:08:33 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.049 21:08:33 -- setup/common.sh@32 -- # continue 00:03:19.049 21:08:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.049 21:08:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.049 21:08:33 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.049 21:08:33 -- setup/common.sh@32 -- # continue 00:03:19.049 21:08:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.049 21:08:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.049 21:08:33 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.049 21:08:33 -- setup/common.sh@32 -- # continue 00:03:19.049 21:08:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.049 21:08:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.049 21:08:33 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.049 21:08:33 -- setup/common.sh@32 -- # continue 00:03:19.049 21:08:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.049 21:08:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.049 21:08:33 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.049 21:08:33 -- setup/common.sh@32 -- # continue 00:03:19.050 21:08:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.050 21:08:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.050 21:08:33 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.050 21:08:33 -- setup/common.sh@32 -- # continue 00:03:19.050 21:08:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.050 21:08:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.050 21:08:33 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.050 21:08:33 -- setup/common.sh@32 -- # continue 00:03:19.050 21:08:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.050 21:08:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.050 21:08:33 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.050 21:08:33 -- setup/common.sh@32 -- # continue 00:03:19.050 21:08:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.050 21:08:33 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:19.050 21:08:33 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.050 21:08:33 -- setup/common.sh@32 -- # continue 00:03:19.050 21:08:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.050 21:08:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.050 21:08:33 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.050 21:08:33 -- setup/common.sh@32 -- # continue 00:03:19.050 21:08:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.050 21:08:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.050 21:08:33 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.050 21:08:33 -- setup/common.sh@32 -- # continue 00:03:19.050 21:08:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.050 21:08:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.050 21:08:33 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.050 21:08:33 -- setup/common.sh@32 -- # continue 00:03:19.050 21:08:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.050 21:08:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.050 21:08:33 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.050 21:08:33 -- setup/common.sh@32 -- # continue 00:03:19.050 21:08:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.050 21:08:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.050 21:08:33 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.050 21:08:33 -- setup/common.sh@32 -- # continue 00:03:19.050 21:08:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.050 21:08:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.050 21:08:33 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.050 21:08:33 -- setup/common.sh@32 -- # continue 00:03:19.050 21:08:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.050 21:08:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.050 21:08:33 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.050 21:08:33 -- setup/common.sh@32 -- # continue 00:03:19.050 21:08:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.050 21:08:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.050 21:08:33 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.050 21:08:33 -- setup/common.sh@32 -- # continue 00:03:19.050 21:08:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.050 21:08:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.050 21:08:33 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.050 21:08:33 -- setup/common.sh@32 -- # continue 00:03:19.050 21:08:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.050 21:08:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.050 21:08:33 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.050 21:08:33 -- setup/common.sh@32 -- # continue 00:03:19.050 21:08:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.050 21:08:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.050 21:08:33 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.050 21:08:33 -- setup/common.sh@32 -- # continue 00:03:19.050 21:08:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.050 21:08:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.050 21:08:33 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.050 21:08:33 -- 
setup/common.sh@32 -- # continue 00:03:19.050 21:08:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.050 21:08:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.050 21:08:33 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.050 21:08:33 -- setup/common.sh@32 -- # continue 00:03:19.050 21:08:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.050 21:08:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.050 21:08:33 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.050 21:08:33 -- setup/common.sh@32 -- # continue 00:03:19.050 21:08:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.050 21:08:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.050 21:08:33 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.050 21:08:33 -- setup/common.sh@32 -- # continue 00:03:19.050 21:08:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.050 21:08:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.050 21:08:33 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.050 21:08:33 -- setup/common.sh@32 -- # continue 00:03:19.050 21:08:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.050 21:08:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.050 21:08:33 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.050 21:08:33 -- setup/common.sh@32 -- # continue 00:03:19.050 21:08:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.050 21:08:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.050 21:08:33 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.050 21:08:33 -- setup/common.sh@32 -- # continue 00:03:19.050 21:08:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.050 21:08:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.050 21:08:33 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.050 21:08:33 -- setup/common.sh@32 -- # continue 00:03:19.050 21:08:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.050 21:08:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.050 21:08:33 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.050 21:08:33 -- setup/common.sh@32 -- # continue 00:03:19.050 21:08:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.050 21:08:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.050 21:08:33 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.051 21:08:33 -- setup/common.sh@32 -- # continue 00:03:19.051 21:08:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.051 21:08:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.051 21:08:33 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.051 21:08:33 -- setup/common.sh@32 -- # continue 00:03:19.051 21:08:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.051 21:08:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.051 21:08:33 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.051 21:08:33 -- setup/common.sh@32 -- # continue 00:03:19.051 21:08:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.051 21:08:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.051 21:08:33 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.051 21:08:33 -- setup/common.sh@32 -- # continue 00:03:19.051 21:08:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.051 21:08:33 -- setup/common.sh@31 -- # read -r var val _ 
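The long runs of "[[ <field> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]" followed by "continue" above are setup/common.sh's get_meminfo helper scanning /proc/meminfo key by key until the requested field matches (which it does just below). A minimal reconstruction of that lookup pattern from the traced statements — the standalone function wrapper and the for-loop-over-array form are simplifications of the script's read loop, not the verbatim source:

    #!/usr/bin/env bash
    shopt -s extglob    # the +([0-9]) pattern below needs extglob, as in setup/common.sh

    get_meminfo() {
        local get=$1 node=${2:-} line var val _
        local mem_f=/proc/meminfo mem
        # With a node argument, read that NUMA node's own meminfo file instead.
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        # Per-node files prefix every line with "Node <n> "; strip it (sh@29 in the trace).
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue    # the runs of 'continue' seen above
            echo "$val"
            return 0
        done
        return 1
    }

    get_meminfo HugePages_Rsvd    # prints 0 on the system being traced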
00:03:19.051 21:08:34 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:19.051 21:08:34 -- setup/common.sh@33 -- # echo 0
00:03:19.051 21:08:34 -- setup/common.sh@33 -- # return 0
00:03:19.051 21:08:34 -- setup/hugepages.sh@100 -- # resv=0
00:03:19.051 21:08:34 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:19.051 nr_hugepages=1024
00:03:19.051 21:08:34 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:19.051 resv_hugepages=0
00:03:19.051 21:08:34 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:19.051 surplus_hugepages=0
00:03:19.051 21:08:34 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:19.051 anon_hugepages=0
00:03:19.051 21:08:34 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:19.051 21:08:34 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:19.051 21:08:34 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:19.051 21:08:34 -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:19.051 21:08:34 -- setup/common.sh@18 -- # local node=
00:03:19.051 21:08:34 -- setup/common.sh@19 -- # local var val
00:03:19.051 21:08:34 -- setup/common.sh@20 -- # local mem_f mem
00:03:19.051 21:08:34 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:19.051 21:08:34 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:19.051 21:08:34 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:19.051 21:08:34 -- setup/common.sh@28 -- # mapfile -t mem
00:03:19.051 21:08:34 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:19.314 21:08:34 -- setup/common.sh@31 -- # IFS=': '
00:03:19.314 21:08:34 -- setup/common.sh@31 -- # read -r var val _
00:03:19.315 21:08:34 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 263718180 kB' 'MemFree: 248938244 kB' 'MemAvailable: 248721720 kB' 'Buffers: 1308 kB' 'Cached: 8418932 kB' 'SwapCached: 0 kB' 'Active: 8645512 kB' 'Inactive: 439520 kB' 'Active(anon): 8073848 kB' 'Inactive(anon): 0 kB' 'Active(file): 571664 kB' 'Inactive(file): 439520 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 674020 kB' 'Mapped: 203952 kB' 'Shmem: 7409056 kB' 'KReclaimable: 533240 kB' 'Slab: 1220588 kB' 'SReclaimable: 533240 kB' 'SUnreclaim: 687348 kB' 'KernelStack: 25248 kB' 'PageTables: 9268 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 139199116 kB' 'Committed_AS: 9693728 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 330780 kB' 'VmallocChunk: 0 kB' 'Percpu: 163840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3289152 kB' 'DirectMap2M: 22702080 kB' 'DirectMap1G: 244318208 kB'
00:03:19.315 21:08:34 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:19.315 21:08:34 -- setup/common.sh@32 -- # continue
00:03:19.315 21:08:34 -- setup/common.sh@31 -- # IFS=': '
00:03:19.315 21:08:34 -- setup/common.sh@31 -- # read -r var val _
00:03:19.315 21:08:34 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:19.315 21:08:34 -- setup/common.sh@32 -- # continue
00:03:19.315 21:08:34 -- setup/common.sh@31 -- # IFS=': '
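The echoes above (nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0) summarize the pool, and hugepages.sh@107 re-asserts the accounting while a fresh HugePages_Total lookup starts. That invariant, written as a standalone check — the awk lookups stand in for get_meminfo and the hard-coded 1024 is this test's request, so treat it as an illustrative sketch rather than the script itself:

    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
    resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
    nr_hugepages=1024    # the count this test configured

    # The invariant asserted at hugepages.sh@107/@110: the kernel's total must
    # equal the requested pages plus surplus and reserved (1024 == 1024 + 0 + 0 here).
    (( total == nr_hugepages + surp + resv )) && echo OK || echo MISMATCH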
00:03:19.315 21:08:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.315 21:08:34 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.315 21:08:34 -- setup/common.sh@32 -- # continue 00:03:19.315 21:08:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.315 21:08:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.315 21:08:34 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.315 21:08:34 -- setup/common.sh@32 -- # continue 00:03:19.315 21:08:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.315 21:08:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.315 21:08:34 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.315 21:08:34 -- setup/common.sh@32 -- # continue 00:03:19.315 21:08:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.315 21:08:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.315 21:08:34 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.315 21:08:34 -- setup/common.sh@32 -- # continue 00:03:19.315 21:08:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.315 21:08:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.315 21:08:34 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.315 21:08:34 -- setup/common.sh@32 -- # continue 00:03:19.315 21:08:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.315 21:08:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.315 21:08:34 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.315 21:08:34 -- setup/common.sh@32 -- # continue 00:03:19.315 21:08:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.315 21:08:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.315 21:08:34 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.315 21:08:34 -- setup/common.sh@32 -- # continue 00:03:19.315 21:08:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.315 21:08:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.315 21:08:34 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.315 21:08:34 -- setup/common.sh@32 -- # continue 00:03:19.315 21:08:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.315 21:08:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.315 21:08:34 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.315 21:08:34 -- setup/common.sh@32 -- # continue 00:03:19.315 21:08:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.315 21:08:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.315 21:08:34 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.315 21:08:34 -- setup/common.sh@32 -- # continue 00:03:19.315 21:08:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.315 21:08:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.315 21:08:34 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.315 21:08:34 -- setup/common.sh@32 -- # continue 00:03:19.315 21:08:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.315 21:08:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.315 21:08:34 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.315 21:08:34 -- setup/common.sh@32 -- # continue 00:03:19.315 21:08:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.315 21:08:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.315 21:08:34 -- setup/common.sh@32 -- # [[ SwapTotal == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.315 21:08:34 -- setup/common.sh@32 -- # continue 00:03:19.315 21:08:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.315 21:08:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.315 21:08:34 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.316 21:08:34 -- setup/common.sh@32 -- # continue 00:03:19.316 21:08:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.316 21:08:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.316 21:08:34 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.316 21:08:34 -- setup/common.sh@32 -- # continue 00:03:19.316 21:08:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.316 21:08:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.316 21:08:34 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.316 21:08:34 -- setup/common.sh@32 -- # continue 00:03:19.316 21:08:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.316 21:08:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.316 21:08:34 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.316 21:08:34 -- setup/common.sh@32 -- # continue 00:03:19.316 21:08:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.316 21:08:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.316 21:08:34 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.316 21:08:34 -- setup/common.sh@32 -- # continue 00:03:19.316 21:08:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.316 21:08:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.316 21:08:34 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.316 21:08:34 -- setup/common.sh@32 -- # continue 00:03:19.316 21:08:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.316 21:08:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.316 21:08:34 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.316 21:08:34 -- setup/common.sh@32 -- # continue 00:03:19.316 21:08:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.316 21:08:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.316 21:08:34 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.316 21:08:34 -- setup/common.sh@32 -- # continue 00:03:19.316 21:08:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.316 21:08:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.316 21:08:34 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.316 21:08:34 -- setup/common.sh@32 -- # continue 00:03:19.316 21:08:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.316 21:08:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.316 21:08:34 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.316 21:08:34 -- setup/common.sh@32 -- # continue 00:03:19.316 21:08:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.316 21:08:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.316 21:08:34 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.316 21:08:34 -- setup/common.sh@32 -- # continue 00:03:19.316 21:08:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.316 21:08:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.316 21:08:34 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.316 21:08:34 -- setup/common.sh@32 -- # continue 00:03:19.316 21:08:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.316 21:08:34 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:19.316 21:08:34 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.316 21:08:34 -- setup/common.sh@32 -- # continue 00:03:19.316 21:08:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.316 21:08:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.316 21:08:34 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.316 21:08:34 -- setup/common.sh@32 -- # continue 00:03:19.316 21:08:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.316 21:08:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.316 21:08:34 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.316 21:08:34 -- setup/common.sh@32 -- # continue 00:03:19.316 21:08:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.316 21:08:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.316 21:08:34 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.316 21:08:34 -- setup/common.sh@32 -- # continue 00:03:19.316 21:08:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.316 21:08:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.316 21:08:34 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.316 21:08:34 -- setup/common.sh@32 -- # continue 00:03:19.316 21:08:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.316 21:08:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.316 21:08:34 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.316 21:08:34 -- setup/common.sh@32 -- # continue 00:03:19.316 21:08:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.316 21:08:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.316 21:08:34 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.316 21:08:34 -- setup/common.sh@32 -- # continue 00:03:19.316 21:08:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.316 21:08:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.316 21:08:34 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.316 21:08:34 -- setup/common.sh@32 -- # continue 00:03:19.316 21:08:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.316 21:08:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.316 21:08:34 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.316 21:08:34 -- setup/common.sh@32 -- # continue 00:03:19.316 21:08:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.316 21:08:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.316 21:08:34 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.316 21:08:34 -- setup/common.sh@32 -- # continue 00:03:19.316 21:08:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.316 21:08:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.316 21:08:34 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.316 21:08:34 -- setup/common.sh@32 -- # continue 00:03:19.316 21:08:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.316 21:08:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.316 21:08:34 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.316 21:08:34 -- setup/common.sh@32 -- # continue 00:03:19.316 21:08:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.316 21:08:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.316 21:08:34 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
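The snapshot being scanned here is internally consistent: 'HugePages_Total: 1024' at 'Hugepagesize: 2048 kB' accounts for exactly 'Hugetlb: 2097152 kB', since 1024 × 2048 = 2097152. The same cross-check against a live box in plain shell — this is not part of the SPDK scripts, and the Hugetlb field only exists on reasonably recent kernels:

    read -r _ total < <(grep '^HugePages_Total' /proc/meminfo)
    read -r _ size _ < <(grep '^Hugepagesize' /proc/meminfo)
    read -r _ hugetlb _ < <(grep '^Hugetlb:' /proc/meminfo)
    # Hugetlb should equal the page count times the page size, both in kB.
    (( hugetlb == total * size )) && echo "hugetlb accounting consistent"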
00:03:19.316 21:08:34 -- setup/common.sh@32 -- # continue 00:03:19.316 21:08:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.316 21:08:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.316 21:08:34 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.316 21:08:34 -- setup/common.sh@32 -- # continue 00:03:19.316 21:08:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.316 21:08:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.316 21:08:34 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.316 21:08:34 -- setup/common.sh@32 -- # continue 00:03:19.316 21:08:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.316 21:08:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.316 21:08:34 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.316 21:08:34 -- setup/common.sh@32 -- # continue 00:03:19.316 21:08:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.316 21:08:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.316 21:08:34 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.316 21:08:34 -- setup/common.sh@32 -- # continue 00:03:19.316 21:08:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.316 21:08:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.316 21:08:34 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.316 21:08:34 -- setup/common.sh@32 -- # continue 00:03:19.316 21:08:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.316 21:08:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.316 21:08:34 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.316 21:08:34 -- setup/common.sh@32 -- # continue 00:03:19.316 21:08:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.316 21:08:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.316 21:08:34 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.316 21:08:34 -- setup/common.sh@32 -- # continue 00:03:19.316 21:08:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.316 21:08:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.317 21:08:34 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.317 21:08:34 -- setup/common.sh@32 -- # continue 00:03:19.317 21:08:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.317 21:08:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.317 21:08:34 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.317 21:08:34 -- setup/common.sh@33 -- # echo 1024 00:03:19.317 21:08:34 -- setup/common.sh@33 -- # return 0 00:03:19.317 21:08:34 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:19.317 21:08:34 -- setup/hugepages.sh@112 -- # get_nodes 00:03:19.317 21:08:34 -- setup/hugepages.sh@27 -- # local node 00:03:19.317 21:08:34 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:19.317 21:08:34 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:19.317 21:08:34 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:19.317 21:08:34 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:19.317 21:08:34 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:19.317 21:08:34 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:19.317 21:08:34 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:19.317 21:08:34 -- setup/hugepages.sh@116 -- # (( 
nodes_test[node] += resv )) 00:03:19.317 21:08:34 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:19.317 21:08:34 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:19.317 21:08:34 -- setup/common.sh@18 -- # local node=0 00:03:19.317 21:08:34 -- setup/common.sh@19 -- # local var val 00:03:19.317 21:08:34 -- setup/common.sh@20 -- # local mem_f mem 00:03:19.317 21:08:34 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:19.317 21:08:34 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:19.317 21:08:34 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:19.317 21:08:34 -- setup/common.sh@28 -- # mapfile -t mem 00:03:19.317 21:08:34 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:19.317 21:08:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.317 21:08:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.317 21:08:34 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 131767928 kB' 'MemFree: 125361008 kB' 'MemUsed: 6406920 kB' 'SwapCached: 0 kB' 'Active: 3273068 kB' 'Inactive: 317048 kB' 'Active(anon): 2824660 kB' 'Inactive(anon): 0 kB' 'Active(file): 448408 kB' 'Inactive(file): 317048 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3431000 kB' 'Mapped: 143724 kB' 'AnonPages: 168212 kB' 'Shmem: 2665544 kB' 'KernelStack: 13448 kB' 'PageTables: 4720 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 255240 kB' 'Slab: 630540 kB' 'SReclaimable: 255240 kB' 'SUnreclaim: 375300 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:19.317 21:08:34 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.317 21:08:34 -- setup/common.sh@32 -- # continue 00:03:19.317 21:08:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.317 21:08:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.317 21:08:34 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.317 21:08:34 -- setup/common.sh@32 -- # continue 00:03:19.317 21:08:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.317 21:08:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.317 21:08:34 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.317 21:08:34 -- setup/common.sh@32 -- # continue 00:03:19.317 21:08:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.317 21:08:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.317 21:08:34 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.317 21:08:34 -- setup/common.sh@32 -- # continue 00:03:19.317 21:08:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.317 21:08:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.317 21:08:34 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.317 21:08:34 -- setup/common.sh@32 -- # continue 00:03:19.317 21:08:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.317 21:08:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.317 21:08:34 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.317 21:08:34 -- setup/common.sh@32 -- # continue 00:03:19.317 21:08:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.317 21:08:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.317 21:08:34 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
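This pass re-runs the helper with node=0, so mem_f switches to /sys/devices/system/node/node0/meminfo; those per-node files carry a "Node 0" prefix and report MemUsed directly (131767928 − 125361008 = 6406920 kB, i.e. MemTotal − MemFree in the dump above). Enumerating nodes the way the get_nodes trace does, as a sketch (extglob assumed enabled, as in the scripts; run on a NUMA machine with this sysfs layout):

    shopt -s extglob
    no_nodes=0
    for node in /sys/devices/system/node/node+([0-9]); do
        n=${node##*node}    # same extraction as nodes_sys[${node##*node}] in the trace
        printf 'node%s: HugePages_Total %s\n' "$n" \
            "$(awk '/HugePages_Total/ {print $NF}' "$node/meminfo")"
        (( ++no_nodes ))
    done
    echo "no_nodes=$no_nodes"    # 2 on the machine traced here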
00:03:19.317 21:08:34 -- setup/common.sh@32 -- # continue 00:03:19.317 21:08:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.317 21:08:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.317 21:08:34 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.317 21:08:34 -- setup/common.sh@32 -- # continue 00:03:19.317 21:08:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.317 21:08:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.317 21:08:34 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.317 21:08:34 -- setup/common.sh@32 -- # continue 00:03:19.317 21:08:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.317 21:08:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.317 21:08:34 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.317 21:08:34 -- setup/common.sh@32 -- # continue 00:03:19.317 21:08:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.317 21:08:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.317 21:08:34 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.317 21:08:34 -- setup/common.sh@32 -- # continue 00:03:19.317 21:08:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.317 21:08:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.317 21:08:34 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.317 21:08:34 -- setup/common.sh@32 -- # continue 00:03:19.317 21:08:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.317 21:08:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.317 21:08:34 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.317 21:08:34 -- setup/common.sh@32 -- # continue 00:03:19.317 21:08:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.317 21:08:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.317 21:08:34 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.317 21:08:34 -- setup/common.sh@32 -- # continue 00:03:19.317 21:08:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.317 21:08:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.317 21:08:34 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.317 21:08:34 -- setup/common.sh@32 -- # continue 00:03:19.317 21:08:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.317 21:08:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.317 21:08:34 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.317 21:08:34 -- setup/common.sh@32 -- # continue 00:03:19.317 21:08:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.317 21:08:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.317 21:08:34 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.317 21:08:34 -- setup/common.sh@32 -- # continue 00:03:19.317 21:08:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.317 21:08:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.317 21:08:34 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.317 21:08:34 -- setup/common.sh@32 -- # continue 00:03:19.317 21:08:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.317 21:08:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.317 21:08:34 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.317 21:08:34 -- setup/common.sh@32 -- # continue 00:03:19.317 21:08:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.317 21:08:34 -- setup/common.sh@31 -- # read -r var val _ 
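Around this per-node scan, hugepages.sh folds global reserved pages and each node's surplus into the expected counts — the "(( nodes_test[node] += resv ))" and "(( nodes_test[node] += 0 ))" steps in the trace — before comparing against what each node actually allocated. The reconstructed shape of that accounting, with the array pre-seeded for a two-node machine like the one traced (in the real script the values come from get_test_nr_hugepages_per_node):

    nodes_test=(512 512)    # expected pages on node0/node1 for this test
    resv=0                  # global HugePages_Rsvd from the lookup above

    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))    # fold in reserved pages
        surp=$(awk '/HugePages_Surp/ {print $NF}' \
                   "/sys/devices/system/node/node$node/meminfo")
        (( nodes_test[node] += surp ))    # fold in this node's surplus (0 here)
    done
    echo "node0=${nodes_test[0]} node1=${nodes_test[1]}"    # 512 / 512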
00:03:19.317 21:08:34 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.317 21:08:34 -- setup/common.sh@32 -- # continue 00:03:19.317 21:08:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.317 21:08:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.317 21:08:34 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.317 21:08:34 -- setup/common.sh@32 -- # continue 00:03:19.317 21:08:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.317 21:08:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.317 21:08:34 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.317 21:08:34 -- setup/common.sh@32 -- # continue 00:03:19.317 21:08:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.317 21:08:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.317 21:08:34 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.317 21:08:34 -- setup/common.sh@32 -- # continue 00:03:19.317 21:08:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.317 21:08:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.317 21:08:34 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.317 21:08:34 -- setup/common.sh@32 -- # continue 00:03:19.317 21:08:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.317 21:08:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.317 21:08:34 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.317 21:08:34 -- setup/common.sh@32 -- # continue 00:03:19.317 21:08:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.317 21:08:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.317 21:08:34 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.317 21:08:34 -- setup/common.sh@32 -- # continue 00:03:19.317 21:08:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.317 21:08:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.318 21:08:34 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.318 21:08:34 -- setup/common.sh@32 -- # continue 00:03:19.318 21:08:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.318 21:08:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.318 21:08:34 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.318 21:08:34 -- setup/common.sh@32 -- # continue 00:03:19.318 21:08:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.318 21:08:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.318 21:08:34 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.318 21:08:34 -- setup/common.sh@32 -- # continue 00:03:19.318 21:08:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.318 21:08:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.318 21:08:34 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.318 21:08:34 -- setup/common.sh@32 -- # continue 00:03:19.318 21:08:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.318 21:08:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.318 21:08:34 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.318 21:08:34 -- setup/common.sh@32 -- # continue 00:03:19.318 21:08:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.318 21:08:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.318 21:08:34 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.318 21:08:34 -- setup/common.sh@32 -- # continue 
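A little further down, once per_node_1G_alloc passes, even_2G_alloc derives its page count the same way every test here does: get_test_nr_hugepages 2097152 divides the request by the default page size to get nr_hugepages=1024, and get_test_nr_hugepages_per_node splits that evenly across _no_nodes=2, giving the 512-per-node expectation echoed below. The arithmetic as a sketch, with the units (kB) inferred from the traced values:

    size_kb=2097152          # the 2 GiB request, in kB, from the trace
    default_hugepages=2048   # Hugepagesize in kB on this host
    (( nr_hugepages = size_kb / default_hugepages ))    # 1024

    no_nodes=2
    (( per_node = nr_hugepages / no_nodes ))            # 512
    echo "nr_hugepages=$nr_hugepages per_node=$per_node"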
00:03:19.318 21:08:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.318 21:08:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.318 21:08:34 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.318 21:08:34 -- setup/common.sh@32 -- # continue 00:03:19.318 21:08:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.318 21:08:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.318 21:08:34 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.318 21:08:34 -- setup/common.sh@32 -- # continue 00:03:19.318 21:08:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.318 21:08:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.318 21:08:34 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.318 21:08:34 -- setup/common.sh@32 -- # continue 00:03:19.318 21:08:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.318 21:08:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.318 21:08:34 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.318 21:08:34 -- setup/common.sh@32 -- # continue 00:03:19.318 21:08:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.318 21:08:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.318 21:08:34 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.318 21:08:34 -- setup/common.sh@33 -- # echo 0 00:03:19.318 21:08:34 -- setup/common.sh@33 -- # return 0 00:03:19.318 21:08:34 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:19.318 21:08:34 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:19.318 21:08:34 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:19.318 21:08:34 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:19.318 21:08:34 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:19.318 21:08:34 -- setup/common.sh@18 -- # local node=1 00:03:19.318 21:08:34 -- setup/common.sh@19 -- # local var val 00:03:19.318 21:08:34 -- setup/common.sh@20 -- # local mem_f mem 00:03:19.318 21:08:34 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:19.318 21:08:34 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:19.318 21:08:34 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:19.318 21:08:34 -- setup/common.sh@28 -- # mapfile -t mem 00:03:19.318 21:08:34 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:19.318 21:08:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.318 21:08:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.318 21:08:34 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 131950252 kB' 'MemFree: 123577380 kB' 'MemUsed: 8372872 kB' 'SwapCached: 0 kB' 'Active: 5372416 kB' 'Inactive: 122472 kB' 'Active(anon): 5249160 kB' 'Inactive(anon): 0 kB' 'Active(file): 123256 kB' 'Inactive(file): 122472 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4989256 kB' 'Mapped: 60228 kB' 'AnonPages: 505760 kB' 'Shmem: 4743528 kB' 'KernelStack: 11784 kB' 'PageTables: 4496 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 278000 kB' 'Slab: 590048 kB' 'SReclaimable: 278000 kB' 'SUnreclaim: 312048 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:19.318 21:08:34 -- setup/common.sh@32 -- # [[ MemTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.318 21:08:34 -- setup/common.sh@32 -- # continue 00:03:19.318 21:08:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.318 21:08:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.318 21:08:34 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.318 21:08:34 -- setup/common.sh@32 -- # continue 00:03:19.318 21:08:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.318 21:08:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.318 21:08:34 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.318 21:08:34 -- setup/common.sh@32 -- # continue 00:03:19.318 21:08:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.318 21:08:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.318 21:08:34 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.318 21:08:34 -- setup/common.sh@32 -- # continue 00:03:19.318 21:08:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.318 21:08:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.318 21:08:34 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.318 21:08:34 -- setup/common.sh@32 -- # continue 00:03:19.318 21:08:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.318 21:08:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.318 21:08:34 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.318 21:08:34 -- setup/common.sh@32 -- # continue 00:03:19.318 21:08:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.318 21:08:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.318 21:08:34 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.318 21:08:34 -- setup/common.sh@32 -- # continue 00:03:19.318 21:08:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.318 21:08:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.318 21:08:34 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.318 21:08:34 -- setup/common.sh@32 -- # continue 00:03:19.318 21:08:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.318 21:08:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.318 21:08:34 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.318 21:08:34 -- setup/common.sh@32 -- # continue 00:03:19.318 21:08:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.318 21:08:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.318 21:08:34 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.318 21:08:34 -- setup/common.sh@32 -- # continue 00:03:19.318 21:08:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.318 21:08:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.318 21:08:34 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.318 21:08:34 -- setup/common.sh@32 -- # continue 00:03:19.318 21:08:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.318 21:08:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.318 21:08:34 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.318 21:08:34 -- setup/common.sh@32 -- # continue 00:03:19.318 21:08:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.318 21:08:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.318 21:08:34 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.318 21:08:34 -- setup/common.sh@32 -- # continue 00:03:19.318 21:08:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.318 21:08:34 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:19.318 21:08:34 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.318 21:08:34 -- setup/common.sh@32 -- # continue 00:03:19.318 21:08:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.318 21:08:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.318 21:08:34 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.318 21:08:34 -- setup/common.sh@32 -- # continue 00:03:19.318 21:08:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.318 21:08:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.318 21:08:34 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.318 21:08:34 -- setup/common.sh@32 -- # continue 00:03:19.318 21:08:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.318 21:08:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.318 21:08:34 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.318 21:08:34 -- setup/common.sh@32 -- # continue 00:03:19.318 21:08:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.318 21:08:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.318 21:08:34 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.318 21:08:34 -- setup/common.sh@32 -- # continue 00:03:19.318 21:08:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.318 21:08:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.318 21:08:34 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.318 21:08:34 -- setup/common.sh@32 -- # continue 00:03:19.318 21:08:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.318 21:08:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.318 21:08:34 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.318 21:08:34 -- setup/common.sh@32 -- # continue 00:03:19.318 21:08:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.318 21:08:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.318 21:08:34 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.318 21:08:34 -- setup/common.sh@32 -- # continue 00:03:19.318 21:08:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.318 21:08:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.318 21:08:34 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.318 21:08:34 -- setup/common.sh@32 -- # continue 00:03:19.318 21:08:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.318 21:08:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.318 21:08:34 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.318 21:08:34 -- setup/common.sh@32 -- # continue 00:03:19.318 21:08:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.318 21:08:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.319 21:08:34 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.319 21:08:34 -- setup/common.sh@32 -- # continue 00:03:19.319 21:08:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.319 21:08:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.319 21:08:34 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.319 21:08:34 -- setup/common.sh@32 -- # continue 00:03:19.319 21:08:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.319 21:08:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.319 21:08:34 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.319 21:08:34 -- setup/common.sh@32 -- # 
continue 00:03:19.319 21:08:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.319 21:08:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.319 21:08:34 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.319 21:08:34 -- setup/common.sh@32 -- # continue 00:03:19.319 21:08:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.319 21:08:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.319 21:08:34 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.319 21:08:34 -- setup/common.sh@32 -- # continue 00:03:19.319 21:08:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.319 21:08:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.319 21:08:34 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.319 21:08:34 -- setup/common.sh@32 -- # continue 00:03:19.319 21:08:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.319 21:08:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.319 21:08:34 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.319 21:08:34 -- setup/common.sh@32 -- # continue 00:03:19.319 21:08:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.319 21:08:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.319 21:08:34 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.319 21:08:34 -- setup/common.sh@32 -- # continue 00:03:19.319 21:08:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.319 21:08:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.319 21:08:34 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.319 21:08:34 -- setup/common.sh@32 -- # continue 00:03:19.319 21:08:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.319 21:08:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.319 21:08:34 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.319 21:08:34 -- setup/common.sh@32 -- # continue 00:03:19.319 21:08:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.319 21:08:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.319 21:08:34 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.319 21:08:34 -- setup/common.sh@32 -- # continue 00:03:19.319 21:08:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.319 21:08:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.319 21:08:34 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.319 21:08:34 -- setup/common.sh@32 -- # continue 00:03:19.319 21:08:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.319 21:08:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.319 21:08:34 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.319 21:08:34 -- setup/common.sh@32 -- # continue 00:03:19.319 21:08:34 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.319 21:08:34 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.319 21:08:34 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.319 21:08:34 -- setup/common.sh@33 -- # echo 0 00:03:19.319 21:08:34 -- setup/common.sh@33 -- # return 0 00:03:19.319 21:08:34 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:19.319 21:08:34 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:19.319 21:08:34 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:19.319 21:08:34 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:19.319 21:08:34 -- 
setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:19.319 node0=512 expecting 512
00:03:19.319 21:08:34 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:19.319 21:08:34 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:19.319 21:08:34 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:19.319 21:08:34 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:03:19.319 node1=512 expecting 512
00:03:19.319 21:08:34 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:03:19.319 
00:03:19.319 real 0m3.125s
00:03:19.319 user 0m1.111s
00:03:19.319 sys 0m1.860s
00:03:19.319 21:08:34 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:03:19.319 21:08:34 -- common/autotest_common.sh@10 -- # set +x
00:03:19.319 ************************************
00:03:19.319 END TEST per_node_1G_alloc
00:03:19.319 ************************************
00:03:19.319 21:08:34 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:03:19.319 21:08:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:03:19.319 21:08:34 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:03:19.319 21:08:34 -- common/autotest_common.sh@10 -- # set +x
00:03:19.319 ************************************
00:03:19.319 START TEST even_2G_alloc ************************************
00:03:19.319 21:08:34 -- common/autotest_common.sh@1111 -- # even_2G_alloc
00:03:19.319 21:08:34 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:03:19.319 21:08:34 -- setup/hugepages.sh@49 -- # local size=2097152
00:03:19.319 21:08:34 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:19.319 21:08:34 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:19.319 21:08:34 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:19.319 21:08:34 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:19.319 21:08:34 -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:19.319 21:08:34 -- setup/hugepages.sh@62 -- # local user_nodes
00:03:19.319 21:08:34 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:19.319 21:08:34 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:19.319 21:08:34 -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:19.319 21:08:34 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:19.319 21:08:34 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:19.319 21:08:34 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:19.319 21:08:34 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:19.319 21:08:34 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:19.319 21:08:34 -- setup/hugepages.sh@83 -- # : 512
00:03:19.319 21:08:34 -- setup/hugepages.sh@84 -- # : 1
00:03:19.319 21:08:34 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:19.319 21:08:34 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:19.319 21:08:34 -- setup/hugepages.sh@83 -- # : 0
00:03:19.319 21:08:34 -- setup/hugepages.sh@84 -- # : 0
00:03:19.319 21:08:34 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:19.319 21:08:34 -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:03:19.319 21:08:34 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:03:19.319 21:08:34 -- setup/hugepages.sh@153 -- # setup output
00:03:19.319 21:08:34 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:19.319 21:08:34 -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh
00:03:22.013 0000:74:02.0 (8086 0cfe): Already using the vfio-pci driver
00:03:22.013 0000:c9:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:22.013 0000:f1:02.0 (8086 0cfe): Already using the vfio-pci driver
00:03:22.013 0000:79:02.0 (8086 0cfe): Already using the vfio-pci driver
00:03:22.013 0000:cb:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:22.013 0000:6f:01.0 (8086 0b25): Already using the vfio-pci driver
00:03:22.013 0000:6f:02.0 (8086 0cfe): Already using the vfio-pci driver
00:03:22.013 0000:f6:01.0 (8086 0b25): Already using the vfio-pci driver
00:03:22.013 0000:f6:02.0 (8086 0cfe): Already using the vfio-pci driver
00:03:22.013 0000:74:01.0 (8086 0b25): Already using the vfio-pci driver
00:03:22.013 0000:6a:02.0 (8086 0cfe): Already using the vfio-pci driver
00:03:22.013 0000:79:01.0 (8086 0b25): Already using the vfio-pci driver
00:03:22.013 0000:ec:01.0 (8086 0b25): Already using the vfio-pci driver
00:03:22.013 0000:6a:01.0 (8086 0b25): Already using the vfio-pci driver
00:03:22.013 0000:ec:02.0 (8086 0cfe): Already using the vfio-pci driver
00:03:22.013 0000:ca:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:22.013 0000:e7:01.0 (8086 0b25): Already using the vfio-pci driver
00:03:22.013 0000:e7:02.0 (8086 0cfe): Already using the vfio-pci driver
00:03:22.013 0000:f1:01.0 (8086 0b25): Already using the vfio-pci driver
00:03:22.276 21:08:37 -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:03:22.276 21:08:37 -- setup/hugepages.sh@89 -- # local node
00:03:22.276 21:08:37 -- setup/hugepages.sh@90 -- # local sorted_t
00:03:22.276 21:08:37 -- setup/hugepages.sh@91 -- # local sorted_s
00:03:22.276 21:08:37 -- setup/hugepages.sh@92 -- # local surp
00:03:22.276 21:08:37 -- setup/hugepages.sh@93 -- # local resv
00:03:22.276 21:08:37 -- setup/hugepages.sh@94 -- # local anon
00:03:22.276 21:08:37 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:22.276 21:08:37 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:22.276 21:08:37 -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:22.276 21:08:37 -- setup/common.sh@18 -- # local node=
00:03:22.276 21:08:37 -- setup/common.sh@19 -- # local var val
00:03:22.276 21:08:37 -- setup/common.sh@20 -- # local mem_f mem
00:03:22.276 21:08:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:22.276 21:08:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:22.276 21:08:37 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:22.276 21:08:37 -- setup/common.sh@28 -- # mapfile -t mem
00:03:22.276 21:08:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:22.276 21:08:37 -- setup/common.sh@31 -- # IFS=': '
00:03:22.276 21:08:37 -- setup/common.sh@31 -- # read -r var val _
00:03:22.276 21:08:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 263718180 kB' 'MemFree: 248919000 kB' 'MemAvailable: 248702476 kB' 'Buffers: 1308 kB' 'Cached: 8419024 kB' 'SwapCached: 0 kB' 'Active: 8646976 kB' 'Inactive: 439520 kB' 'Active(anon): 8075312 kB' 'Inactive(anon): 0 kB' 'Active(file): 571664 kB' 'Inactive(file): 439520 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 674916 kB' 'Mapped: 204124 kB' 'Shmem: 7409148 kB' 'KReclaimable: 533240 kB' 'Slab: 1221352 kB' 'SReclaimable: 533240 kB' 'SUnreclaim: 688112 kB' 'KernelStack: 25424 kB' 'PageTables: 9888 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 139199116 kB' 'Committed_AS: 9692068 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 330892
kB' 'VmallocChunk: 0 kB' 'Percpu: 163840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3289152 kB' 'DirectMap2M: 22702080 kB' 'DirectMap1G: 244318208 kB' 00:03:22.276 21:08:37 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.276 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.276 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.276 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.276 21:08:37 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.276 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.276 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.276 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.276 21:08:37 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.276 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.276 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.276 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.276 21:08:37 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.276 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.276 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.276 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.276 21:08:37 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.276 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.276 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.276 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.276 21:08:37 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.276 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.276 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.276 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.276 21:08:37 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.276 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.276 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.276 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.276 21:08:37 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.276 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.276 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.276 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.276 21:08:37 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.276 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.276 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.276 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.276 21:08:37 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.276 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.276 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.276 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.276 21:08:37 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.276 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.276 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.276 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.276 21:08:37 -- setup/common.sh@32 
-- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.276 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.276 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.276 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.276 21:08:37 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.276 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.276 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.276 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.276 21:08:37 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.276 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.276 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.276 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.276 21:08:37 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.276 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.276 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.276 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.276 21:08:37 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.276 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.276 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.276 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.276 21:08:37 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.276 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.276 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.276 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.276 21:08:37 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.276 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.276 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.276 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.276 21:08:37 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.276 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.276 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.276 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.276 21:08:37 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.276 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.276 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.276 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.276 21:08:37 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.276 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.276 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.276 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.276 21:08:37 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.276 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.276 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.276 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.276 21:08:37 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.276 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.276 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.276 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.276 21:08:37 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.276 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.276 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.276 21:08:37 -- setup/common.sh@31 -- # read -r var 
val _ 00:03:22.276 21:08:37 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.276 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.276 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.276 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.276 21:08:37 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.276 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.276 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.277 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.277 21:08:37 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.277 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.277 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.277 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.277 21:08:37 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.277 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.277 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.277 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.277 21:08:37 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.277 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.277 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.277 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.277 21:08:37 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.277 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.277 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.277 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.277 21:08:37 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.277 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.277 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.277 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.277 21:08:37 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.277 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.277 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.277 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.277 21:08:37 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.277 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.277 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.277 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.277 21:08:37 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.277 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.277 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.277 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.277 21:08:37 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.277 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.277 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.277 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.277 21:08:37 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.277 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.277 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.277 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.277 21:08:37 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.277 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.277 21:08:37 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:22.277 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.277 21:08:37 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.277 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.277 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.277 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.277 21:08:37 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.277 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.277 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.277 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.277 21:08:37 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.277 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.277 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.277 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.277 21:08:37 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.277 21:08:37 -- setup/common.sh@33 -- # echo 0 00:03:22.277 21:08:37 -- setup/common.sh@33 -- # return 0 00:03:22.277 21:08:37 -- setup/hugepages.sh@97 -- # anon=0 00:03:22.277 21:08:37 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:22.277 21:08:37 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:22.277 21:08:37 -- setup/common.sh@18 -- # local node= 00:03:22.277 21:08:37 -- setup/common.sh@19 -- # local var val 00:03:22.277 21:08:37 -- setup/common.sh@20 -- # local mem_f mem 00:03:22.277 21:08:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:22.277 21:08:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:22.277 21:08:37 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:22.277 21:08:37 -- setup/common.sh@28 -- # mapfile -t mem 00:03:22.277 21:08:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:22.277 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.277 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.277 21:08:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 263718180 kB' 'MemFree: 248919276 kB' 'MemAvailable: 248702752 kB' 'Buffers: 1308 kB' 'Cached: 8419028 kB' 'SwapCached: 0 kB' 'Active: 8647152 kB' 'Inactive: 439520 kB' 'Active(anon): 8075488 kB' 'Inactive(anon): 0 kB' 'Active(file): 571664 kB' 'Inactive(file): 439520 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 675124 kB' 'Mapped: 204124 kB' 'Shmem: 7409152 kB' 'KReclaimable: 533240 kB' 'Slab: 1221352 kB' 'SReclaimable: 533240 kB' 'SUnreclaim: 688112 kB' 'KernelStack: 25408 kB' 'PageTables: 9844 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 139199116 kB' 'Committed_AS: 9692080 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 330876 kB' 'VmallocChunk: 0 kB' 'Percpu: 163840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3289152 kB' 'DirectMap2M: 22702080 kB' 'DirectMap1G: 244318208 kB' 00:03:22.277 21:08:37 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.277 21:08:37 -- setup/common.sh@32 -- # continue 
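The pass traced above is setup/common.sh's get_meminfo: dump the relevant meminfo file, then read it token by token with IFS=': ' and return the value whose key matches the requested field (AnonHugePages just finished with anon=0; HugePages_Surp is being scanned now). A minimal sketch of that lookup, assuming only the standard /proc/meminfo and per-node sysfs layout -- the helper below is an illustration, not the setup/common.sh source:

    # Print one field from /proc/meminfo, or from a node's sysfs meminfo
    # when a node id is given. Simplified stand-in for what the xtrace
    # above shows; echoes 0 when the field is absent, as the trace does.
    get_meminfo_sketch() {
        local get=$1 node=${2:-} mem_f=/proc/meminfo line var val _
        [[ -n $node ]] && mem_f=/sys/devices/system/node/node$node/meminfo
        while IFS= read -r line; do
            line=${line#"Node $node "}   # per-node rows start with "Node <id>"
            IFS=': ' read -r var val _ <<<"$line"
            [[ $var == "$get" ]] && { echo "${val:-0}"; return 0; }
        done <"$mem_f"
        echo 0
    }
    get_meminfo_sketch HugePages_Surp      # system-wide, like the scan above
    get_meminfo_sketch HugePages_Total 0   # node 0, like the per-node pass further down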
00:03:22.277 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.277 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.277 21:08:37 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.277 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.277 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.277 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.277 21:08:37 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.277 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.277 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.277 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.277 21:08:37 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.277 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.277 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.277 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.277 21:08:37 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.277 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.277 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.277 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.277 21:08:37 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.277 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.277 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.277 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.277 21:08:37 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.277 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.277 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.277 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.277 21:08:37 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.277 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.277 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.277 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.277 21:08:37 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.277 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.277 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.277 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.277 21:08:37 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.277 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.277 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.277 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.277 21:08:37 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.277 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.277 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.277 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.277 21:08:37 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.277 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.277 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.277 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.277 21:08:37 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.277 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.277 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.277 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.277 21:08:37 -- setup/common.sh@32 -- # [[ 
Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.277 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.277 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.277 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.277 21:08:37 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.277 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.277 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.277 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.277 21:08:37 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.277 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.277 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.277 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.277 21:08:37 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.277 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.277 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.277 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.277 21:08:37 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.277 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.277 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.277 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.278 21:08:37 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.278 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.278 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.278 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.278 21:08:37 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.278 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.278 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.278 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.278 21:08:37 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.278 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.278 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.278 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.278 21:08:37 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.278 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.278 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.278 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.278 21:08:37 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.278 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.278 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.278 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.278 21:08:37 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.278 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.278 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.278 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.278 21:08:37 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.278 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.278 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.278 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.278 21:08:37 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.278 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.278 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.278 21:08:37 -- setup/common.sh@31 -- # 
read -r var val _ 00:03:22.278 21:08:37 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.278 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.278 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.278 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.278 21:08:37 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.278 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.278 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.278 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.278 21:08:37 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.278 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.278 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.278 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.278 21:08:37 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.278 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.278 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.278 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.278 21:08:37 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.278 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.278 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.278 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.278 21:08:37 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.278 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.278 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.278 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.278 21:08:37 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.278 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.278 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.278 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.278 21:08:37 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.278 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.278 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.278 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.278 21:08:37 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.278 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.278 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.278 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.278 21:08:37 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.278 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.278 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.278 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.278 21:08:37 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.278 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.278 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.278 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.278 21:08:37 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.278 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.278 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.278 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.278 21:08:37 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.278 21:08:37 -- setup/common.sh@32 -- # continue 
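The runs of \H\u\g\e\P\a\g\e\s\_\S\u\r\p in these comparisons are not corruption: a quoted right-hand side of [[ x == y ]] is matched literally rather than as a glob, and bash's xtrace prints such a pattern with every character backslash-escaped to make the literal match visible. The same trace output can be reproduced standalone:

    set -x
    var=SReclaimable want=HugePages_Surp
    # Traces as: + [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
    [[ $var == "$want" ]] || echo "$var: not the requested field, keep scanning"
    set +x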
00:03:22.278 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.278 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.278 21:08:37 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.278 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.278 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.278 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.278 21:08:37 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.278 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.278 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.278 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.278 21:08:37 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.278 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.278 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.278 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.278 21:08:37 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.278 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.278 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.278 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.278 21:08:37 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.278 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.278 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.278 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.278 21:08:37 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.278 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.278 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.278 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.278 21:08:37 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.278 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.278 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.278 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.278 21:08:37 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.278 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.278 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.278 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.278 21:08:37 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.278 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.278 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.278 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.278 21:08:37 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.278 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.278 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.278 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.278 21:08:37 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.278 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.278 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.278 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.278 21:08:37 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.278 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.278 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.278 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.278 21:08:37 -- 
setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.278 21:08:37 -- setup/common.sh@33 -- # echo 0 00:03:22.278 21:08:37 -- setup/common.sh@33 -- # return 0 00:03:22.542 21:08:37 -- setup/hugepages.sh@99 -- # surp=0 00:03:22.542 21:08:37 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:22.542 21:08:37 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:22.542 21:08:37 -- setup/common.sh@18 -- # local node= 00:03:22.542 21:08:37 -- setup/common.sh@19 -- # local var val 00:03:22.542 21:08:37 -- setup/common.sh@20 -- # local mem_f mem 00:03:22.542 21:08:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:22.542 21:08:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:22.542 21:08:37 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:22.542 21:08:37 -- setup/common.sh@28 -- # mapfile -t mem 00:03:22.542 21:08:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:22.542 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.542 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.543 21:08:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 263718180 kB' 'MemFree: 248918004 kB' 'MemAvailable: 248701480 kB' 'Buffers: 1308 kB' 'Cached: 8419036 kB' 'SwapCached: 0 kB' 'Active: 8646012 kB' 'Inactive: 439520 kB' 'Active(anon): 8074348 kB' 'Inactive(anon): 0 kB' 'Active(file): 571664 kB' 'Inactive(file): 439520 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 674348 kB' 'Mapped: 203984 kB' 'Shmem: 7409160 kB' 'KReclaimable: 533240 kB' 'Slab: 1221352 kB' 'SReclaimable: 533240 kB' 'SUnreclaim: 688112 kB' 'KernelStack: 25392 kB' 'PageTables: 9704 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 139199116 kB' 'Committed_AS: 9692092 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 330876 kB' 'VmallocChunk: 0 kB' 'Percpu: 163840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3289152 kB' 'DirectMap2M: 22702080 kB' 'DirectMap1G: 244318208 kB' 00:03:22.543 21:08:37 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.543 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.543 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.543 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.543 21:08:37 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.543 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.543 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.543 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.543 21:08:37 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.543 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.543 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.543 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.543 21:08:37 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.543 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.543 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.543 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 
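At this point the script has anon=0 and surp=0 and is reading HugePages_Rsvd; verify_nr_hugepages then requires that the pool adds up, the `(( 1024 == nr_hugepages + surp + resv ))` check visible further down this trace. A hedged sketch of that bookkeeping, reusing the get_meminfo_sketch helper above (a simplified stand-in, not the setup/hugepages.sh source):

    nr_hugepages=1024   # what the test asked for (2097152 kB / 2048 kB pages)
    total=$(get_meminfo_sketch HugePages_Total)
    surp=$(get_meminfo_sketch HugePages_Surp)
    resv=$(get_meminfo_sketch HugePages_Rsvd)
    if (( total == nr_hugepages + surp + resv )); then
        echo "hugepage pool consistent: total=$total surp=$surp resv=$resv"
    else
        echo "pool mismatch: total=$total vs nr+surp+resv=$((nr_hugepages + surp + resv))" >&2
    fi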
00:03:22.543 21:08:37 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.543 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.543 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.543 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.543 21:08:37 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.543 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.543 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.543 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.543 21:08:37 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.543 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.543 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.543 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.543 21:08:37 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.543 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.543 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.543 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.543 21:08:37 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.543 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.543 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.543 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.543 21:08:37 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.543 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.543 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.543 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.543 21:08:37 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.543 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.543 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.543 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.543 21:08:37 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.543 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.543 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.543 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.543 21:08:37 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.543 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.543 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.543 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.543 21:08:37 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.543 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.543 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.543 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.543 21:08:37 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.543 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.543 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.543 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.543 21:08:37 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.543 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.543 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.543 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.543 21:08:37 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.543 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.543 21:08:37 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:22.543 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.543 21:08:37 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.543 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.543 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.543 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.543 21:08:37 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.543 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.543 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.543 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.543 21:08:37 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.543 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.543 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.543 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.543 21:08:37 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.543 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.543 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.543 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.543 21:08:37 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.543 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.543 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.543 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.543 21:08:37 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.543 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.543 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.543 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.543 21:08:37 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.543 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.543 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.543 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.543 21:08:37 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.543 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.543 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.543 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.543 21:08:37 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.543 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.543 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.543 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.543 21:08:37 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.543 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.543 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.543 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.543 21:08:37 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.543 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.543 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.543 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.543 21:08:37 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.543 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.543 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.543 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.543 21:08:37 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
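This whole verification pass follows the NRHUGE=1024 / HUGE_EVEN_ALLOC=yes setup at the start of the test: 1024 2 MB pages split 512/512 across the two NUMA nodes, with the per-node counts read back from sysfs (the nodes_sys[${node##*node}]=512 assignments near the end of this trace). A sketch of that even split, assuming only the standard kernel sysfs layout -- an illustration, not the scripts/setup.sh source (writing the counts needs root):

    NRHUGE=1024
    nodes=(/sys/devices/system/node/node[0-9]*)
    per_node=$(( NRHUGE / ${#nodes[@]} ))        # 512 per node on this 2-node box
    for node in "${nodes[@]}"; do
        nr=$node/hugepages/hugepages-2048kB/nr_hugepages
        echo "$per_node" | sudo tee "$nr" >/dev/null
        # ${node##*node} keeps only the node id -- the same expansion the
        # trace uses to index nodes_sys.
        echo "node${node##*node}: $(cat "$nr") pages"
    done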
00:03:22.543 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.543 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.543 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.543 21:08:37 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.543 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.543 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.543 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.543 21:08:37 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.543 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.543 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.543 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.543 21:08:37 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.543 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.543 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.543 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.543 21:08:37 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.543 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.543 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.543 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.543 21:08:37 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.543 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.543 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.543 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.543 21:08:37 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.543 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.543 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.543 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.543 21:08:37 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.543 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.543 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.543 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.543 21:08:37 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.543 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.543 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.543 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.543 21:08:37 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.543 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.543 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.543 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.543 21:08:37 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.544 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.544 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.544 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.544 21:08:37 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.544 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.544 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.544 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.544 21:08:37 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.544 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.544 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.544 21:08:37 -- setup/common.sh@31 -- # 
read -r var val _ 00:03:22.544 21:08:37 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.544 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.544 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.544 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.544 21:08:37 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.544 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.544 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.544 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.544 21:08:37 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.544 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.544 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.544 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.544 21:08:37 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.544 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.544 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.544 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.544 21:08:37 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.544 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.544 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.544 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.544 21:08:37 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.544 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.544 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.544 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.544 21:08:37 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.544 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.544 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.544 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.544 21:08:37 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.544 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.544 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.544 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.544 21:08:37 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.544 21:08:37 -- setup/common.sh@33 -- # echo 0 00:03:22.544 21:08:37 -- setup/common.sh@33 -- # return 0 00:03:22.544 21:08:37 -- setup/hugepages.sh@100 -- # resv=0 00:03:22.544 21:08:37 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:22.544 nr_hugepages=1024 00:03:22.544 21:08:37 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:22.544 resv_hugepages=0 00:03:22.544 21:08:37 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:22.544 surplus_hugepages=0 00:03:22.544 21:08:37 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:22.544 anon_hugepages=0 00:03:22.544 21:08:37 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:22.544 21:08:37 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:22.544 21:08:37 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:22.544 21:08:37 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:22.544 21:08:37 -- setup/common.sh@18 -- # local node= 00:03:22.544 21:08:37 -- setup/common.sh@19 -- # local var val 00:03:22.544 21:08:37 -- setup/common.sh@20 -- # local mem_f mem 00:03:22.544 21:08:37 -- setup/common.sh@22 -- # 
mem_f=/proc/meminfo 00:03:22.544 21:08:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:22.544 21:08:37 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:22.544 21:08:37 -- setup/common.sh@28 -- # mapfile -t mem 00:03:22.544 21:08:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:22.544 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.544 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.544 21:08:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 263718180 kB' 'MemFree: 248918016 kB' 'MemAvailable: 248701492 kB' 'Buffers: 1308 kB' 'Cached: 8419056 kB' 'SwapCached: 0 kB' 'Active: 8645808 kB' 'Inactive: 439520 kB' 'Active(anon): 8074144 kB' 'Inactive(anon): 0 kB' 'Active(file): 571664 kB' 'Inactive(file): 439520 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 674172 kB' 'Mapped: 203984 kB' 'Shmem: 7409180 kB' 'KReclaimable: 533240 kB' 'Slab: 1221352 kB' 'SReclaimable: 533240 kB' 'SUnreclaim: 688112 kB' 'KernelStack: 25376 kB' 'PageTables: 9656 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 139199116 kB' 'Committed_AS: 9692108 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 330876 kB' 'VmallocChunk: 0 kB' 'Percpu: 163840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3289152 kB' 'DirectMap2M: 22702080 kB' 'DirectMap1G: 244318208 kB' 00:03:22.544 21:08:37 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.544 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.544 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.544 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.544 21:08:37 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.544 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.544 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.544 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.544 21:08:37 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.544 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.544 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.544 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.544 21:08:37 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.544 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.544 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.544 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.544 21:08:37 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.544 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.544 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.544 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.544 21:08:37 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.544 21:08:37 -- setup/common.sh@32 -- # continue 00:03:22.544 21:08:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.544 21:08:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.544 21:08:37 -- setup/common.sh@32 -- # [[ Active == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
[xtrace elided: setup/common.sh@31-32 scans the remaining /proc/meminfo keys (Inactive through Unaccepted) against HugePages_Total and continues past each non-match]
00:03:22.545 21:08:37 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:22.545 21:08:37 -- setup/common.sh@33 -- # echo 1024
00:03:22.545 21:08:37 -- setup/common.sh@33 -- # return 0
00:03:22.545 21:08:37 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:22.545 21:08:37 -- setup/hugepages.sh@112 -- # get_nodes
00:03:22.545 21:08:37 -- setup/hugepages.sh@27 -- # local node
00:03:22.545 21:08:37 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:22.545 21:08:37 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:22.545 21:08:37 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:22.545 21:08:37 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:22.545 21:08:37 -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:22.545 21:08:37 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
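The key scans collapsed above and below are all produced by one small helper. For reference, a minimal sketch of that helper, reconstructed from the xtrace (names follow the trace; this is a sketch, not the verbatim setup/common.sh):

#!/usr/bin/env bash
# Sketch of get_meminfo as implied by the trace above; an assumption,
# not the real script.
shopt -s extglob

get_meminfo() {
    local get=$1 node=$2 var val _
    local mem_f=/proc/meminfo mem
    # Per-node counters live in sysfs when a node id is supplied.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem <"$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")       # strip the "Node N" prefix sysfs adds
    local line
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<<"$line"
        [[ $var == "$get" ]] || continue   # the long scans elided in this log
        echo "$val"                        # e.g. 1024 for HugePages_Total
        return 0
    done
    return 1
}

get_meminfo HugePages_Total     # system-wide, from /proc/meminfo
get_meminfo HugePages_Surp 0    # node 0 only, from sysfs when present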
00:03:22.545 21:08:37 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:22.545 21:08:37 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:22.545 21:08:37 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:22.545 21:08:37 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:22.545 21:08:37 -- setup/common.sh@18 -- # local node=0
00:03:22.545 21:08:37 -- setup/common.sh@19 -- # local var val
00:03:22.545 21:08:37 -- setup/common.sh@20 -- # local mem_f mem
00:03:22.545 21:08:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:22.545 21:08:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:22.545 21:08:37 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:22.545 21:08:37 -- setup/common.sh@28 -- # mapfile -t mem
00:03:22.545 21:08:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:22.545 21:08:37 -- setup/common.sh@31 -- # IFS=': '
00:03:22.545 21:08:37 -- setup/common.sh@31 -- # read -r var val _
00:03:22.545 21:08:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 131767928 kB' 'MemFree: 125344916 kB' 'MemUsed: 6423012 kB' 'SwapCached: 0 kB' 'Active: 3272784 kB' 'Inactive: 317048 kB' 'Active(anon): 2824376 kB' 'Inactive(anon): 0 kB' 'Active(file): 448408 kB' 'Inactive(file): 317048 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3431076 kB' 'Mapped: 143760 kB' 'AnonPages: 167880 kB' 'Shmem: 2665620 kB' 'KernelStack: 13592 kB' 'PageTables: 4980 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 255240 kB' 'Slab: 630604 kB' 'SReclaimable: 255240 kB' 'SUnreclaim: 375364 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[xtrace elided: setup/common.sh@31-32 scans the node0 meminfo keys (MemTotal through HugePages_Free) against HugePages_Surp and continues past each non-match]
00:03:22.546 21:08:37 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:22.546 21:08:37 -- setup/common.sh@33 -- # echo 0
00:03:22.546 21:08:37 -- setup/common.sh@33 -- # return 0
00:03:22.546 21:08:37 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:22.546 21:08:37 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:22.546 21:08:37 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:22.546 21:08:37 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:22.546 21:08:37 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:22.546 21:08:37 -- setup/common.sh@18 -- # local node=1
00:03:22.546 21:08:37 -- setup/common.sh@19 -- # local var val
00:03:22.546 21:08:37 -- setup/common.sh@20 -- # local mem_f mem
00:03:22.546 21:08:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:22.546 21:08:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:22.546 21:08:37 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:22.546 21:08:37 -- setup/common.sh@28 -- # mapfile -t mem
00:03:22.546 21:08:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:22.546 21:08:37 -- setup/common.sh@31 -- # IFS=': '
00:03:22.547 21:08:37 -- setup/common.sh@31 -- # read -r var val _
00:03:22.547 21:08:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 131950252 kB' 'MemFree: 123573352 kB' 'MemUsed: 8376900 kB' 'SwapCached: 0 kB' 'Active: 5373040 kB' 'Inactive: 122472 kB' 'Active(anon): 5249784 kB' 'Inactive(anon): 0 kB' 'Active(file): 123256 kB' 'Inactive(file): 122472 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4989288 kB' 'Mapped: 60224 kB' 'AnonPages: 506380 kB' 'Shmem: 4743560 kB' 'KernelStack: 11784 kB' 'PageTables: 4648 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 278000 kB' 'Slab: 590736 kB' 'SReclaimable: 278000 kB' 'SUnreclaim: 312736 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[xtrace elided: setup/common.sh@31-32 scans the node1 meminfo keys (MemTotal through HugePages_Free) against HugePages_Surp and continues past each non-match]
00:03:22.548 21:08:37 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:22.548 21:08:37 -- setup/common.sh@33 -- # echo 0
00:03:22.548 21:08:37 -- setup/common.sh@33 -- # return 0
00:03:22.548 21:08:37 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:22.548 21:08:37 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:22.548 21:08:37 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:22.548 21:08:37 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:22.548 21:08:37 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:22.548 node0=512 expecting 512
00:03:22.548 21:08:37 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:22.548 21:08:37 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:22.548 21:08:37 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:22.548 21:08:37 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:03:22.548 node1=512 expecting 512
00:03:22.548 21:08:37 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:03:22.548
00:03:22.548 real 0m3.128s
00:03:22.548 user 0m1.090s
00:03:22.548 sys 0m1.892s
00:03:22.548 21:08:37 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:03:22.548 21:08:37 -- common/autotest_common.sh@10 -- # set +x
00:03:22.548 ************************************
00:03:22.548 END TEST even_2G_alloc
00:03:22.548 ************************************
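The check that just passed is easier to see as a few lines of shell. A sketch of the per-node accounting implied by the hugepages.sh@115-130 entries above (an assumed reading, not the verbatim setup/hugepages.sh; it reuses the get_meminfo sketch shown earlier):

# Per-node bookkeeping as implied by the trace; resv is 0 in this run.
nr_hugepages=1024 resv=0
declare -a nodes_test=([0]=512 [1]=512)   # expected even split from get_nodes
for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += resv ))        # reserved pages
    (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))  # surplus, 0 here
    echo "node$node=${nodes_test[node]} expecting 512"
done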
00:03:22.548 21:08:37 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:03:22.548 21:08:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:03:22.548 21:08:37 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:03:22.548 21:08:37 -- common/autotest_common.sh@10 -- # set +x
00:03:22.548 ************************************
00:03:22.548 START TEST odd_alloc
00:03:22.548 ************************************
00:03:22.548 21:08:37 -- common/autotest_common.sh@1111 -- # odd_alloc
00:03:22.548 21:08:37 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:03:22.548 21:08:37 -- setup/hugepages.sh@49 -- # local size=2098176
00:03:22.548 21:08:37 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:22.548 21:08:37 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:22.548 21:08:37 -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:03:22.548 21:08:37 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:22.548 21:08:37 -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:22.548 21:08:37 -- setup/hugepages.sh@62 -- # local user_nodes
00:03:22.548 21:08:37 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:03:22.548 21:08:37 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:22.548 21:08:37 -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:22.548 21:08:37 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:22.548 21:08:37 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:22.548 21:08:37 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:22.548 21:08:37 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:22.548 21:08:37 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:22.548 21:08:37 -- setup/hugepages.sh@83 -- # : 513
00:03:22.548 21:08:37 -- setup/hugepages.sh@84 -- # : 1
00:03:22.548 21:08:37 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:22.548 21:08:37 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513
00:03:22.548 21:08:37 -- setup/hugepages.sh@83 -- # : 0
00:03:22.548 21:08:37 -- setup/hugepages.sh@84 -- # : 0
00:03:22.548 21:08:37 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:22.548 21:08:37 -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:03:22.548 21:08:37 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:03:22.548 21:08:37 -- setup/hugepages.sh@160 -- # setup output
00:03:22.548 21:08:37 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:22.548 21:08:37 -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh
00:03:25.850 0000:74:02.0 (8086 0cfe): Already using the vfio-pci driver
00:03:25.850 0000:c9:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:25.850 0000:f1:02.0 (8086 0cfe): Already using the vfio-pci driver
00:03:25.850 0000:79:02.0 (8086 0cfe): Already using the vfio-pci driver
00:03:25.850 0000:cb:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:25.850 0000:6f:01.0 (8086 0b25): Already using the vfio-pci driver
00:03:25.850 0000:6f:02.0 (8086 0cfe): Already using the vfio-pci driver
00:03:25.850 0000:f6:01.0 (8086 0b25): Already using the vfio-pci driver
00:03:25.850 0000:f6:02.0 (8086 0cfe): Already using the vfio-pci driver
00:03:25.850 0000:74:01.0 (8086 0b25): Already using the vfio-pci driver
00:03:25.850 0000:6a:02.0 (8086 0cfe): Already using the vfio-pci driver
00:03:25.850 0000:79:01.0 (8086 0b25): Already using the vfio-pci driver
00:03:25.850 0000:ec:01.0 (8086 0b25): Already using the vfio-pci driver
00:03:25.850 0000:6a:01.0 (8086 0b25): Already using the vfio-pci driver
00:03:25.850 0000:ec:02.0 (8086 0cfe): Already using the vfio-pci driver
00:03:25.850 0000:ca:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:25.850 0000:e7:01.0 (8086 0b25): Already using the vfio-pci driver
00:03:25.850 0000:e7:02.0 (8086 0cfe): Already using the vfio-pci driver
00:03:25.850 0000:f1:01.0 (8086 0b25): Already using the vfio-pci driver
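The page count and per-node split traced above, as a sketch. The rounding and the choice of which node takes the odd page are assumptions read off the @57 and @82-84 entries (node1 gets 512 first, node0 ends at 513); the real get_test_nr_hugepages_per_node may distribute differently:

# HUGEMEM=2049 MB -> 1025 x 2M pages, split 513/512 across 2 nodes (assumed).
HUGEMEM=2049
size=$(( HUGEMEM * 1024 ))                 # 2098176 kB
default_hugepages=2048                     # kB per 2M hugepage
nr_hugepages=$(( (size + default_hugepages - 1) / default_hugepages ))  # 1025
_no_nodes=2
declare -a nodes_test
for (( node = _no_nodes - 1; node >= 0; node-- )); do
    nodes_test[node]=$(( nr_hugepages / _no_nodes ))   # 512 each
done
(( nodes_test[0] += nr_hugepages % _no_nodes ))        # odd remainder -> 513 on node0
echo "node0=${nodes_test[0]} node1=${nodes_test[1]}"   # node0=513 node1=512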
00:03:25.850 21:08:40 -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:03:25.850 21:08:40 -- setup/hugepages.sh@89 -- # local node
00:03:25.850 21:08:40 -- setup/hugepages.sh@90 -- # local sorted_t
00:03:25.850 21:08:40 -- setup/hugepages.sh@91 -- # local sorted_s
00:03:25.850 21:08:40 -- setup/hugepages.sh@92 -- # local surp
00:03:25.850 21:08:40 -- setup/hugepages.sh@93 -- # local resv
00:03:25.850 21:08:40 -- setup/hugepages.sh@94 -- # local anon
00:03:25.850 21:08:40 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:25.850 21:08:40 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:25.850 21:08:40 -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:25.850 21:08:40 -- setup/common.sh@18 -- # local node=
00:03:25.850 21:08:40 -- setup/common.sh@19 -- # local var val
00:03:25.850 21:08:40 -- setup/common.sh@20 -- # local mem_f mem
00:03:25.850 21:08:40 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:25.850 21:08:40 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:25.850 21:08:40 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:25.850 21:08:40 -- setup/common.sh@28 -- # mapfile -t mem
00:03:25.850 21:08:40 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:25.850 21:08:40 -- setup/common.sh@31 -- # IFS=': '
00:03:25.850 21:08:40 -- setup/common.sh@31 -- # read -r var val _
00:03:25.850 21:08:40 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 263718180 kB' 'MemFree: 248927900 kB' 'MemAvailable: 248711376 kB' 'Buffers: 1308 kB' 'Cached: 8419160 kB' 'SwapCached: 0 kB' 'Active: 8646016 kB' 'Inactive: 439520 kB' 'Active(anon): 8074352 kB' 'Inactive(anon): 0 kB' 'Active(file): 571664 kB' 'Inactive(file): 439520 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 674536 kB' 'Mapped: 204092 kB' 'Shmem: 7409284 kB' 'KReclaimable: 533240 kB' 'Slab: 1221196 kB' 'SReclaimable: 533240 kB' 'SUnreclaim: 687956 kB' 'KernelStack: 25392 kB' 'PageTables: 9380 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 139198092 kB' 'Committed_AS: 9693452 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 330860 kB' 'VmallocChunk: 0 kB' 'Percpu: 163840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3289152 kB' 'DirectMap2M: 22702080 kB' 'DirectMap1G: 244318208 kB'
[xtrace elided: setup/common.sh@31-32 scans the meminfo keys (MemTotal through HardwareCorrupted) against AnonHugePages and continues past each non-match]
00:03:25.851 21:08:40 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:25.851 21:08:40 -- setup/common.sh@33 -- # echo 0
00:03:25.851 21:08:40 -- setup/common.sh@33 -- # return 0
00:03:25.851 21:08:40 -- setup/hugepages.sh@97 -- # anon=0
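A sketch of what the @96 gate above appears to do, under the assumption that it compares the contents of /sys/kernel/mm/transparent_hugepage/enabled against a literal [never] before bothering to count anon hugepages (reuses the get_meminfo sketch shown earlier; not the verbatim setup/hugepages.sh):

# Only count AnonHugePages when THP is not disabled outright (assumed).
thp=$(</sys/kernel/mm/transparent_hugepage/enabled)  # e.g. "always [madvise] never"
anon=0
if [[ $thp != *'[never]'* ]]; then
    anon=$(get_meminfo AnonHugePages)  # 0 kB in this run
fi
echo "anon=$anon"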
00:03:25.851 21:08:40 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:25.851 21:08:40 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:25.851 21:08:40 -- setup/common.sh@18 -- # local node=
00:03:25.851 21:08:40 -- setup/common.sh@19 -- # local var val
00:03:25.851 21:08:40 -- setup/common.sh@20 -- # local mem_f mem
00:03:25.851 21:08:40 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:25.851 21:08:40 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:25.851 21:08:40 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:25.851 21:08:40 -- setup/common.sh@28 -- # mapfile -t mem
00:03:25.851 21:08:40 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:25.851 21:08:40 -- setup/common.sh@31 -- # IFS=': '
00:03:25.851 21:08:40 -- setup/common.sh@31 -- # read -r var val _
00:03:25.851 21:08:40 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 263718180 kB' 'MemFree: 248928152 kB' 'MemAvailable: 248711628 kB' 'Buffers: 1308 kB' 'Cached: 8419160 kB' 'SwapCached: 0 kB' 'Active: 8646236 kB' 'Inactive: 439520 kB' 'Active(anon): 8074572 kB' 'Inactive(anon): 0 kB' 'Active(file): 571664 kB' 'Inactive(file): 439520 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 674784 kB' 'Mapped: 204092 kB' 'Shmem: 7409284 kB' 'KReclaimable: 533240 kB' 'Slab: 1221196 kB' 'SReclaimable: 533240 kB' 'SUnreclaim: 687956 kB' 'KernelStack: 25376 kB' 'PageTables: 9340 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 139198092 kB' 'Committed_AS: 9693464 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 330844 kB' 'VmallocChunk: 0 kB' 'Percpu: 163840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3289152 kB' 'DirectMap2M: 22702080 kB' 'DirectMap1G: 244318208 kB'
[xtrace elided: setup/common.sh@31-32 scans the meminfo keys (MemTotal through HugePages_Free) against HugePages_Surp and continues past each non-match]
00:03:25.853 21:08:40 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:25.853 21:08:40 -- setup/common.sh@33 -- # echo 0
00:03:25.853 21:08:40 -- setup/common.sh@33 -- # return 0
00:03:25.853 21:08:40 -- setup/hugepages.sh@99 -- # surp=0
00:03:25.853 21:08:40 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:25.853 21:08:40 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:25.853 21:08:40 -- setup/common.sh@18 -- # local node=
00:03:25.853 21:08:40 -- setup/common.sh@19 -- # local var val
00:03:25.853 21:08:40 -- setup/common.sh@20 -- # local mem_f mem
00:03:25.853 21:08:40 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:25.853 21:08:40 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:25.853 21:08:40 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:25.853 21:08:40 -- setup/common.sh@28 -- # mapfile -t mem
00:03:25.853 21:08:40 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:25.853 21:08:40 -- setup/common.sh@31 -- # IFS=': '
00:03:25.853 21:08:40 -- setup/common.sh@31 -- # read -r var val _
00:03:25.853 21:08:40 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 263718180 kB' 'MemFree: 248928228 kB' 'MemAvailable: 248711704 kB' 'Buffers: 1308 kB' 'Cached: 8419172 kB' 'SwapCached: 0 kB' 'Active: 8646016 kB' 'Inactive: 439520 kB'
'Active(anon): 8074352 kB' 'Inactive(anon): 0 kB' 'Active(file): 571664 kB' 'Inactive(file): 439520 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 674464 kB' 'Mapped: 204036 kB' 'Shmem: 7409296 kB' 'KReclaimable: 533240 kB' 'Slab: 1221296 kB' 'SReclaimable: 533240 kB' 'SUnreclaim: 688056 kB' 'KernelStack: 25392 kB' 'PageTables: 9276 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 139198092 kB' 'Committed_AS: 9693476 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 330876 kB' 'VmallocChunk: 0 kB' 'Percpu: 163840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3289152 kB' 'DirectMap2M: 22702080 kB' 'DirectMap1G: 244318208 kB' 00:03:25.853 21:08:40 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.853 21:08:40 -- setup/common.sh@32 -- # continue 00:03:25.853 21:08:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.853 21:08:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.853 21:08:40 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.853 21:08:40 -- setup/common.sh@32 -- # continue 00:03:25.853 21:08:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.853 21:08:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.853 21:08:40 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.853 21:08:40 -- setup/common.sh@32 -- # continue 00:03:25.853 21:08:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.853 21:08:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.853 21:08:40 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.853 21:08:40 -- setup/common.sh@32 -- # continue 00:03:25.853 21:08:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.853 21:08:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.853 21:08:40 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.853 21:08:40 -- setup/common.sh@32 -- # continue 00:03:25.853 21:08:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.853 21:08:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.853 21:08:40 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.853 21:08:40 -- setup/common.sh@32 -- # continue 00:03:25.853 21:08:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.853 21:08:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.853 21:08:40 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.853 21:08:40 -- setup/common.sh@32 -- # continue 00:03:25.853 21:08:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.853 21:08:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.853 21:08:40 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.853 21:08:40 -- setup/common.sh@32 -- # continue 00:03:25.853 21:08:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.853 21:08:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.853 21:08:40 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.853 21:08:40 -- setup/common.sh@32 -- # continue 00:03:25.853 21:08:40 -- setup/common.sh@31 -- # IFS=': ' 
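What the wall of [[ ... == \H\u\g\e... ]] / continue lines in this trace is doing: get_meminfo slurps a meminfo file with mapfile, strips any "Node N " prefix, then re-reads the lines with IFS=': ' until the requested field is found. A minimal, runnable condensation of that pattern, assuming bash 4+ on Linux; meminfo_field is a hypothetical name, not the real setup/common.sh helper:

#!/usr/bin/env bash
# Minimal condensation of the get_meminfo pattern traced above.
# meminfo_field is a hypothetical name, not the real setup/common.sh helper.
shopt -s extglob

meminfo_field() {
    local get=$1 node=${2-}    # field name, optional NUMA node
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")    # per-node files prefix each line with "Node N "
    local var val _
    while IFS=': ' read -r var val _; do
        # quoting "$get" makes the == a literal comparison, not a glob
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

meminfo_field HugePages_Surp        # prints 0 on the box in this log
meminfo_field HugePages_Surp 0      # same field, but from node0's meminfo

Quoting "$get" on the right of == forces a literal match; the backslash-escaped \H\u\g\e... form in the trace is simply how xtrace renders that same quoted pattern.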
00:03:25.853 21:08:40 -- setup/common.sh@32 -- # [[ <field> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] / continue  (per-field scan over the same snapshot, MemTotal through HugePages_Free; none match)
00:03:25.854 21:08:40 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:25.854 21:08:40 -- setup/common.sh@33 -- # echo 0
00:03:25.854 21:08:40 -- setup/common.sh@33 -- # return 0
00:03:25.854 21:08:40 -- setup/hugepages.sh@100 -- # resv=0
00:03:25.854 21:08:40 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:03:25.854 nr_hugepages=1025
00:03:25.854 21:08:40 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:25.854 resv_hugepages=0
00:03:25.854 21:08:40 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:25.854 surplus_hugepages=0
00:03:25.854 21:08:40 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:25.854 anon_hugepages=0
00:03:25.854 21:08:40 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:03:25.854 21:08:40 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
00:03:25.854 21:08:40 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:25.854 21:08:40 -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:25.854 21:08:40 -- setup/common.sh@18 -- # local node=
00:03:25.854 21:08:40 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:25.854 21:08:40 -- setup/common.sh@28 -- # mapfile -t mem
00:03:25.854 21:08:40 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:25.854 21:08:40 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 263718180 kB' 'MemFree: 248927724 kB' 'MemAvailable: 248711200 kB' 'Buffers: 1308 kB' 'Cached: 8419176 kB' 'SwapCached: 0 kB' 'Active: 8646044 kB' 'Inactive: 439520 kB' 'Active(anon): 8074380 kB' 'Inactive(anon): 0 kB' 'Active(file): 571664 kB' 'Inactive(file): 439520 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 674544 kB' 'Mapped: 204096 kB' 'Shmem: 7409300 kB' 'KReclaimable: 533240 kB' 'Slab: 1221296 kB' 'SReclaimable: 533240 kB' 'SUnreclaim: 688056 kB' 'KernelStack: 25392 kB' 'PageTables: 9264 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 139198092 kB' 'Committed_AS: 9693492 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 330876 kB' 'VmallocChunk: 0 kB' 'Percpu: 163840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3289152 kB' 'DirectMap2M: 22702080 kB' 'DirectMap1G: 244318208 kB'
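The hugepages.sh@107-110 checks above assert that the kernel-reported pool matches what the test requested once surplus and reserved pages are counted. A sketch of the same arithmetic, assuming a Linux /proc/meminfo; verify_hugepages is a hypothetical name, and the awk reads stand in for the get_meminfo calls shown in the trace:

#!/usr/bin/env bash
# Sketch of the pool-consistency arithmetic from hugepages.sh@107-110;
# verify_hugepages is a hypothetical name.
verify_hugepages() {
    local want=$1 total surp resv
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
    resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
    echo "nr_hugepages=$want resv_hugepages=$resv surplus_hugepages=$surp"
    # same expression the trace evaluates: the reported total must equal
    # the requested count plus surplus plus reserved pages
    (( total == want + surp + resv ))
}

verify_hugepages 1025 && echo "hugepage pool consistent"

On this run surp and resv are both 0, so the check reduces to 1025 == 1025, which is why the trace moves straight on to get_meminfo HugePages_Total.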
00:03:25.855 21:08:40 -- setup/common.sh@32 -- # [[ <field> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] / continue  (per-field scan, MemTotal through Unaccepted; none match)
00:03:25.856 21:08:40 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:25.856 21:08:40 -- setup/common.sh@33 -- # echo 1025
00:03:25.856 21:08:40 -- setup/common.sh@33 -- # return 0
00:03:25.856 21:08:40 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
00:03:25.856 21:08:40 -- setup/hugepages.sh@112 -- # get_nodes
00:03:25.856 21:08:40 -- setup/hugepages.sh@27 -- # local node
00:03:25.856 21:08:40 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:25.856 21:08:40 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:25.856 21:08:40 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:25.856 21:08:40 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513
00:03:25.856 21:08:40 -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:25.856 21:08:40 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:25.856 21:08:40 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:25.856 21:08:40 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:25.856 21:08:40 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:25.856 21:08:40 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:25.856 21:08:40 -- setup/common.sh@18 -- # local node=0
00:03:25.856 21:08:40 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:25.856 21:08:40 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:25.856 21:08:40 -- setup/common.sh@28 -- # mapfile -t mem
00:03:25.856 21:08:40 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:25.856 21:08:40 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 131767928 kB' 'MemFree: 125368852 kB' 'MemUsed: 6399076 kB' 'SwapCached: 0 kB' 'Active: 3273500 kB' 'Inactive: 317048 kB' 'Active(anon): 2825092 kB' 'Inactive(anon): 0 kB' 'Active(file): 448408 kB' 'Inactive(file): 317048 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3431148 kB' 'Mapped: 143820 kB' 'AnonPages: 168584 kB' 'Shmem: 2665692 kB' 'KernelStack: 13560 kB' 'PageTables: 4756 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 255240 kB' 'Slab: 631424 kB' 'SReclaimable: 255240 kB' 'SUnreclaim: 376184 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
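get_nodes (hugepages.sh@27-33 above) discovers NUMA nodes with an extglob pathname pattern and records a per-node hugepage count, here 512 for node0 and 513 for node1. A sketch under the assumption that the counts come from the standard sysfs nr_hugepages files; the trace itself only shows the node glob and the resulting assignments:

#!/usr/bin/env bash
# Sketch of the get_nodes walk (hugepages.sh@27-33); reading the counts
# from sysfs nr_hugepages files is an assumption.
shopt -s extglob nullglob

declare -A nodes_sys=()
for node in /sys/devices/system/node/node+([0-9]); do
    id=${node##*node}    # ".../node1" -> "1"
    nodes_sys[$id]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
done

echo "no_nodes=${#nodes_sys[@]}"                  # 2 on this box
for id in "${!nodes_sys[@]}"; do
    echo "node$id: ${nodes_sys[$id]} hugepages"   # 512 and 513 here
done

Note that shopt -s extglob must run before bash parses the glob, which is why the script enables it at top level rather than inside the loop.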
21:08:40 -- setup/common.sh@32 -- # [[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue  (per-field scan of node0 meminfo, MemTotal through HugePages_Free; none match)
00:03:25.857 21:08:40 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:25.857 21:08:40 -- setup/common.sh@33 -- # echo 0
00:03:25.857 21:08:40 -- setup/common.sh@33 -- # return 0
00:03:25.857 21:08:40 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
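The @115-@117 loop above builds the expected per-node totals: each entry of nodes_test is padded by the reserved count, then by that node's own HugePages_Surp read from its per-node meminfo. A runnable reconstruction with the values from this run; node_surp is a hypothetical helper:

#!/usr/bin/env bash
# Reconstruction of the nodes_test bookkeeping (hugepages.sh@115-117).
# node_surp is a hypothetical helper; starting values are from this run.
node_surp() {
    # per-node meminfo lines look like "Node 0 HugePages_Surp:     0",
    # so take the last field rather than $2
    awk '/HugePages_Surp:/ {print $NF}' "/sys/devices/system/node/node$1/meminfo"
}

declare -A nodes_test=([0]=512 [1]=513)
resv=0
for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += resv ))
    (( nodes_test[node] += $(node_surp "$node") ))
    echo "node$node expects ${nodes_test[$node]} hugepages"
done

With resv=0 and zero surplus on both nodes, the expected counts stay 512 and 513, matching the 1025-page pool checked earlier.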
00:03:25.857 21:08:40 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:25.857 21:08:40 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:25.857 21:08:40 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:25.857 21:08:40 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:25.857 21:08:40 -- setup/common.sh@18 -- # local node=1
00:03:25.857 21:08:40 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:25.857 21:08:40 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:25.857 21:08:40 -- setup/common.sh@28 -- # mapfile -t mem
00:03:25.857 21:08:40 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:25.857 21:08:40 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 131950252 kB' 'MemFree: 123558648 kB' 'MemUsed: 8391604 kB' 'SwapCached: 0 kB' 'Active: 5372304 kB' 'Inactive: 122472 kB' 'Active(anon): 5249048 kB' 'Inactive(anon): 0 kB' 'Active(file): 123256 kB' 'Inactive(file): 122472 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4989336 kB' 'Mapped: 60216 kB' 'AnonPages: 505560 kB' 'Shmem: 4743608 kB' 'KernelStack: 11800 kB' 'PageTables: 4412 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 278000 kB' 'Slab: 589872 kB' 'SReclaimable: 278000 kB' 'SUnreclaim: 311872 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0'
00:03:25.858 21:08:40 -- setup/common.sh@32 -- # [[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue  (per-field scan of node1 meminfo, MemTotal through HugePages_Total ...
-- # read -r var val _ 00:03:25.858 21:08:40 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.858 21:08:40 -- setup/common.sh@32 -- # continue 00:03:25.858 21:08:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.858 21:08:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.858 21:08:40 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.858 21:08:40 -- setup/common.sh@33 -- # echo 0 00:03:25.858 21:08:40 -- setup/common.sh@33 -- # return 0 00:03:25.858 21:08:40 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:25.858 21:08:40 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:25.858 21:08:40 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:25.858 21:08:40 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:25.858 21:08:40 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:03:25.858 node0=512 expecting 513 00:03:25.858 21:08:40 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:25.858 21:08:40 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:25.858 21:08:40 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:25.858 21:08:40 -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:03:25.858 node1=513 expecting 512 00:03:25.858 21:08:40 -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:03:25.858 00:03:25.858 real 0m3.256s 00:03:25.858 user 0m1.076s 00:03:25.858 sys 0m2.055s 00:03:25.858 21:08:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:25.858 21:08:40 -- common/autotest_common.sh@10 -- # set +x 00:03:25.858 ************************************ 00:03:25.858 END TEST odd_alloc 00:03:25.858 ************************************ 00:03:25.858 21:08:40 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:25.858 21:08:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:25.858 21:08:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:25.858 21:08:40 -- common/autotest_common.sh@10 -- # set +x 00:03:26.120 ************************************ 00:03:26.120 START TEST custom_alloc 00:03:26.120 ************************************ 00:03:26.120 21:08:40 -- common/autotest_common.sh@1111 -- # custom_alloc 00:03:26.120 21:08:40 -- setup/hugepages.sh@167 -- # local IFS=, 00:03:26.120 21:08:40 -- setup/hugepages.sh@169 -- # local node 00:03:26.120 21:08:40 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:26.120 21:08:40 -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:26.120 21:08:40 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:26.120 21:08:40 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:26.120 21:08:40 -- setup/hugepages.sh@49 -- # local size=1048576 00:03:26.120 21:08:40 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:26.120 21:08:40 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:26.120 21:08:40 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:26.120 21:08:40 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:26.120 21:08:40 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:26.120 21:08:40 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:26.120 21:08:40 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:26.120 21:08:40 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:26.120 21:08:40 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:26.120 21:08:40 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:26.120 
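The @31/@32/@33 lines here, and in the three scans that follow, are bash xtrace output of one simple pattern: print the meminfo snapshot once, then split each 'Key: value' pair on IFS=': ' and skip with 'continue' until the requested key matches. A minimal standalone sketch of that pattern (get_meminfo_sketch is a hypothetical name, not the actual setup/common.sh helper, which additionally mapfiles the snapshot, as the trace shows):

  #!/usr/bin/env bash
  # Return the value of one meminfo field, system-wide or for one NUMA node.
  get_meminfo_sketch() {
      local get=$1 node=${2:-}
      local mem_f=/proc/meminfo
      [[ -n $node ]] && mem_f=/sys/devices/system/node/node$node/meminfo
      local var val _
      # per-node files prefix each line with "Node <n> "; strip that prefix
      # the way the trace's mem=("${mem[@]#Node +([0-9]) }") does
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] && { echo "${val:-0}"; return 0; }   # common.sh@32-33
      done < <(sed -E 's/^Node [0-9]+ +//' "$mem_f")
      return 1
  }

  get_meminfo_sketch HugePages_Surp 0   # prints 0 here, matching the trace

The echo 0 / return 0 pair that closes each scan in the trace is exactly this match branch: the requested key's value (0 for HugePages_Surp on node0) is printed and the loop exits.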
21:08:40 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:26.120 21:08:40 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:26.120 21:08:40 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:26.120 21:08:40 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:26.120 21:08:40 -- setup/hugepages.sh@83 -- # : 256 00:03:26.120 21:08:40 -- setup/hugepages.sh@84 -- # : 1 00:03:26.120 21:08:40 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:26.120 21:08:40 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:26.120 21:08:40 -- setup/hugepages.sh@83 -- # : 0 00:03:26.120 21:08:40 -- setup/hugepages.sh@84 -- # : 0 00:03:26.120 21:08:40 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:26.120 21:08:40 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:26.120 21:08:40 -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:03:26.120 21:08:40 -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:03:26.120 21:08:40 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:26.120 21:08:40 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:26.120 21:08:40 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:26.120 21:08:40 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:26.120 21:08:40 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:26.120 21:08:40 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:26.120 21:08:40 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:26.120 21:08:40 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:26.120 21:08:40 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:26.120 21:08:40 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:26.120 21:08:40 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:26.120 21:08:40 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:26.120 21:08:40 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:26.120 21:08:40 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:26.120 21:08:40 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:26.120 21:08:40 -- setup/hugepages.sh@78 -- # return 0 00:03:26.120 21:08:40 -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:03:26.120 21:08:40 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:26.120 21:08:40 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:26.120 21:08:40 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:26.120 21:08:40 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:26.120 21:08:40 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:26.120 21:08:40 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:26.120 21:08:40 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:26.120 21:08:40 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:26.120 21:08:40 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:26.120 21:08:40 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:26.120 21:08:40 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:26.120 21:08:40 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:26.120 21:08:40 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:26.120 21:08:40 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:26.120 21:08:40 -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:03:26.120 21:08:40 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:26.120 21:08:40 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:26.120 21:08:40 -- setup/hugepages.sh@75 -- # 
for _no_nodes in "${!nodes_hp[@]}" 00:03:26.120 21:08:40 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:03:26.120 21:08:40 -- setup/hugepages.sh@78 -- # return 0 00:03:26.120 21:08:40 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:03:26.120 21:08:40 -- setup/hugepages.sh@187 -- # setup output 00:03:26.120 21:08:40 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:26.120 21:08:40 -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh 00:03:28.662 0000:74:02.0 (8086 0cfe): Already using the vfio-pci driver 00:03:28.662 0000:c9:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:28.662 0000:f1:02.0 (8086 0cfe): Already using the vfio-pci driver 00:03:28.662 0000:79:02.0 (8086 0cfe): Already using the vfio-pci driver 00:03:28.662 0000:cb:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:28.662 0000:6f:01.0 (8086 0b25): Already using the vfio-pci driver 00:03:28.923 0000:6f:02.0 (8086 0cfe): Already using the vfio-pci driver 00:03:28.923 0000:f6:01.0 (8086 0b25): Already using the vfio-pci driver 00:03:28.923 0000:f6:02.0 (8086 0cfe): Already using the vfio-pci driver 00:03:28.923 0000:74:01.0 (8086 0b25): Already using the vfio-pci driver 00:03:28.923 0000:6a:02.0 (8086 0cfe): Already using the vfio-pci driver 00:03:28.923 0000:79:01.0 (8086 0b25): Already using the vfio-pci driver 00:03:28.923 0000:ec:01.0 (8086 0b25): Already using the vfio-pci driver 00:03:28.923 0000:6a:01.0 (8086 0b25): Already using the vfio-pci driver 00:03:28.923 0000:ec:02.0 (8086 0cfe): Already using the vfio-pci driver 00:03:28.923 0000:ca:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:28.923 0000:e7:01.0 (8086 0b25): Already using the vfio-pci driver 00:03:28.923 0000:e7:02.0 (8086 0cfe): Already using the vfio-pci driver 00:03:28.923 0000:f1:01.0 (8086 0b25): Already using the vfio-pci driver 00:03:29.186 21:08:44 -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:03:29.186 21:08:44 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:29.186 21:08:44 -- setup/hugepages.sh@89 -- # local node 00:03:29.186 21:08:44 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:29.186 21:08:44 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:29.186 21:08:44 -- setup/hugepages.sh@92 -- # local surp 00:03:29.186 21:08:44 -- setup/hugepages.sh@93 -- # local resv 00:03:29.186 21:08:44 -- setup/hugepages.sh@94 -- # local anon 00:03:29.186 21:08:44 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:29.186 21:08:44 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:29.186 21:08:44 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:29.186 21:08:44 -- setup/common.sh@18 -- # local node= 00:03:29.186 21:08:44 -- setup/common.sh@19 -- # local var val 00:03:29.186 21:08:44 -- setup/common.sh@20 -- # local mem_f mem 00:03:29.186 21:08:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:29.186 21:08:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:29.186 21:08:44 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:29.186 21:08:44 -- setup/common.sh@28 -- # mapfile -t mem 00:03:29.186 21:08:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:29.186 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.186 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.186 21:08:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 263718180 kB' 'MemFree: 247884380 kB' 'MemAvailable: 247667840 kB' 'Buffers: 1308 kB' 'Cached: 8419292 
kB' 'SwapCached: 0 kB' 'Active: 8646904 kB' 'Inactive: 439520 kB' 'Active(anon): 8075240 kB' 'Inactive(anon): 0 kB' 'Active(file): 571664 kB' 'Inactive(file): 439520 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 674572 kB' 'Mapped: 204200 kB' 'Shmem: 7409416 kB' 'KReclaimable: 533208 kB' 'Slab: 1219880 kB' 'SReclaimable: 533208 kB' 'SUnreclaim: 686672 kB' 'KernelStack: 25488 kB' 'PageTables: 9036 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 138674828 kB' 'Committed_AS: 9695260 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 330876 kB' 'VmallocChunk: 0 kB' 'Percpu: 163840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3289152 kB' 'DirectMap2M: 22702080 kB' 'DirectMap1G: 244318208 kB' 00:03:29.186
[xtrace key-scan elided: setup/common.sh@32 compares each field of the snapshot above to AnonHugePages and skips it with 'continue'; the trace resumes at the last comparisons:]
setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.187 21:08:44 -- setup/common.sh@32 -- # continue 00:03:29.187 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.187 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.187 21:08:44 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.187 21:08:44 -- setup/common.sh@32 -- # continue 00:03:29.187 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.187 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.187 21:08:44 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.187 21:08:44 -- setup/common.sh@32 -- # continue 00:03:29.187 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.187 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.187 21:08:44 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.187 21:08:44 -- setup/common.sh@32 -- # continue 00:03:29.187 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.187 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.187 21:08:44 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.187 21:08:44 -- setup/common.sh@32 -- # continue 00:03:29.187 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.187 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.187 21:08:44 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.187 21:08:44 -- setup/common.sh@32 -- # continue 00:03:29.187 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.187 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.187 21:08:44 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.187 21:08:44 -- setup/common.sh@33 -- # echo 0 00:03:29.187 21:08:44 -- setup/common.sh@33 -- # return 0 00:03:29.187 21:08:44 -- setup/hugepages.sh@97 -- # anon=0 00:03:29.187 21:08:44 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:29.187 21:08:44 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:29.187 21:08:44 -- setup/common.sh@18 -- # local node= 00:03:29.187 21:08:44 -- setup/common.sh@19 -- # local var val 00:03:29.187 21:08:44 -- setup/common.sh@20 -- # local mem_f mem 00:03:29.187 21:08:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:29.187 21:08:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:29.187 21:08:44 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:29.187 21:08:44 -- setup/common.sh@28 -- # mapfile -t mem 00:03:29.187 21:08:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:29.187 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.187 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.187 21:08:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 263718180 kB' 'MemFree: 247885412 kB' 'MemAvailable: 247668872 kB' 'Buffers: 1308 kB' 'Cached: 8419292 kB' 'SwapCached: 0 kB' 'Active: 8646944 kB' 'Inactive: 439520 kB' 'Active(anon): 8075280 kB' 'Inactive(anon): 0 kB' 'Active(file): 571664 kB' 'Inactive(file): 439520 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 674652 kB' 'Mapped: 204124 kB' 'Shmem: 7409416 kB' 'KReclaimable: 533208 kB' 'Slab: 1219876 kB' 'SReclaimable: 533208 kB' 'SUnreclaim: 686668 kB' 'KernelStack: 25456 kB' 'PageTables: 9340 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 
'CommitLimit: 138674828 kB' 'Committed_AS: 9695272 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 330844 kB' 'VmallocChunk: 0 kB' 'Percpu: 163840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3289152 kB' 'DirectMap2M: 22702080 kB' 'DirectMap1G: 244318208 kB' 00:03:29.187
[xtrace key-scan elided: setup/common.sh@32 compares each field of the snapshot above to HugePages_Surp and skips it with 'continue'; the scan resumes at the last key:]
00:03:29.188 21:08:44 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.188 21:08:44 -- setup/common.sh@32 -- #
continue 00:03:29.188 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.188 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.188 21:08:44 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.188 21:08:44 -- setup/common.sh@32 -- # continue 00:03:29.188 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.188 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.188 21:08:44 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.188 21:08:44 -- setup/common.sh@32 -- # continue 00:03:29.188 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.188 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.188 21:08:44 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.188 21:08:44 -- setup/common.sh@33 -- # echo 0 00:03:29.188 21:08:44 -- setup/common.sh@33 -- # return 0 00:03:29.453 21:08:44 -- setup/hugepages.sh@99 -- # surp=0 00:03:29.453 21:08:44 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:29.453 21:08:44 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:29.453 21:08:44 -- setup/common.sh@18 -- # local node= 00:03:29.453 21:08:44 -- setup/common.sh@19 -- # local var val 00:03:29.453 21:08:44 -- setup/common.sh@20 -- # local mem_f mem 00:03:29.453 21:08:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:29.453 21:08:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:29.453 21:08:44 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:29.453 21:08:44 -- setup/common.sh@28 -- # mapfile -t mem 00:03:29.453 21:08:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:29.453 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.453 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.453 21:08:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 263718180 kB' 'MemFree: 247884904 kB' 'MemAvailable: 247668364 kB' 'Buffers: 1308 kB' 'Cached: 8419304 kB' 'SwapCached: 0 kB' 'Active: 8645712 kB' 'Inactive: 439520 kB' 'Active(anon): 8074048 kB' 'Inactive(anon): 0 kB' 'Active(file): 571664 kB' 'Inactive(file): 439520 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 673860 kB' 'Mapped: 204048 kB' 'Shmem: 7409428 kB' 'KReclaimable: 533208 kB' 'Slab: 1219888 kB' 'SReclaimable: 533208 kB' 'SUnreclaim: 686680 kB' 'KernelStack: 25520 kB' 'PageTables: 9644 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 138674828 kB' 'Committed_AS: 9696808 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 330908 kB' 'VmallocChunk: 0 kB' 'Percpu: 163840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3289152 kB' 'DirectMap2M: 22702080 kB' 'DirectMap1G: 244318208 kB' 00:03:29.453 21:08:44 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.453 21:08:44 -- setup/common.sh@32 -- # continue 00:03:29.453 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.453 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.453 21:08:44 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.453 21:08:44 
-- setup/common.sh@32 -- # continue 00:03:29.453
[xtrace key-scan elided: setup/common.sh@32 compares each field of the snapshot above to HugePages_Rsvd and skips it with 'continue'; the trace resumes near the end of the scan:]
21:08:44 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.454
21:08:44 -- setup/common.sh@32 -- # continue 00:03:29.454 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.454 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.454 21:08:44 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.454 21:08:44 -- setup/common.sh@32 -- # continue 00:03:29.454 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.454 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.454 21:08:44 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.454 21:08:44 -- setup/common.sh@32 -- # continue 00:03:29.454 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.454 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.454 21:08:44 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.454 21:08:44 -- setup/common.sh@32 -- # continue 00:03:29.454 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.454 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.454 21:08:44 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.454 21:08:44 -- setup/common.sh@32 -- # continue 00:03:29.454 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.454 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.454 21:08:44 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.454 21:08:44 -- setup/common.sh@32 -- # continue 00:03:29.454 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.454 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.454 21:08:44 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.454 21:08:44 -- setup/common.sh@32 -- # continue 00:03:29.454 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.454 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.454 21:08:44 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.454 21:08:44 -- setup/common.sh@32 -- # continue 00:03:29.454 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.454 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.454 21:08:44 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.454 21:08:44 -- setup/common.sh@32 -- # continue 00:03:29.454 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.454 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.454 21:08:44 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.454 21:08:44 -- setup/common.sh@32 -- # continue 00:03:29.454 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.454 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.454 21:08:44 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.454 21:08:44 -- setup/common.sh@32 -- # continue 00:03:29.454 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.454 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.455 21:08:44 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.455 21:08:44 -- setup/common.sh@33 -- # echo 0 00:03:29.455 21:08:44 -- setup/common.sh@33 -- # return 0 00:03:29.455 21:08:44 -- setup/hugepages.sh@100 -- # resv=0 00:03:29.455 21:08:44 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:03:29.455 nr_hugepages=1536 00:03:29.455 21:08:44 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:29.455 resv_hugepages=0 00:03:29.455 21:08:44 -- setup/hugepages.sh@104 -- # echo 
surplus_hugepages=0 00:03:29.455 surplus_hugepages=0 00:03:29.455 21:08:44 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:29.455 anon_hugepages=0 00:03:29.455 21:08:44 -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:29.455 21:08:44 -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:03:29.455 21:08:44 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:29.455 21:08:44 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:29.455 21:08:44 -- setup/common.sh@18 -- # local node= 00:03:29.455 21:08:44 -- setup/common.sh@19 -- # local var val 00:03:29.455 21:08:44 -- setup/common.sh@20 -- # local mem_f mem 00:03:29.455 21:08:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:29.455 21:08:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:29.455 21:08:44 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:29.455 21:08:44 -- setup/common.sh@28 -- # mapfile -t mem 00:03:29.455 21:08:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:29.455 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.455 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.455 21:08:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 263718180 kB' 'MemFree: 247886492 kB' 'MemAvailable: 247669952 kB' 'Buffers: 1308 kB' 'Cached: 8419316 kB' 'SwapCached: 0 kB' 'Active: 8646448 kB' 'Inactive: 439520 kB' 'Active(anon): 8074784 kB' 'Inactive(anon): 0 kB' 'Active(file): 571664 kB' 'Inactive(file): 439520 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 674612 kB' 'Mapped: 204040 kB' 'Shmem: 7409440 kB' 'KReclaimable: 533208 kB' 'Slab: 1219856 kB' 'SReclaimable: 533208 kB' 'SUnreclaim: 686648 kB' 'KernelStack: 25472 kB' 'PageTables: 9576 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 138674828 kB' 'Committed_AS: 9696824 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 330956 kB' 'VmallocChunk: 0 kB' 'Percpu: 163840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3289152 kB' 'DirectMap2M: 22702080 kB' 'DirectMap1G: 244318208 kB' 00:03:29.455 21:08:44 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.455 21:08:44 -- setup/common.sh@32 -- # continue 00:03:29.455 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.455 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.455 21:08:44 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.455 21:08:44 -- setup/common.sh@32 -- # continue 00:03:29.455 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.455 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.455 21:08:44 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.455 21:08:44 -- setup/common.sh@32 -- # continue 00:03:29.455 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.455 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.455 21:08:44 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.455 21:08:44 -- setup/common.sh@32 -- # continue 00:03:29.455 21:08:44 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:29.455 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.455 21:08:44 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.455 21:08:44 -- setup/common.sh@32 -- # continue 00:03:29.455 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.455 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.455 21:08:44 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.455 21:08:44 -- setup/common.sh@32 -- # continue 00:03:29.455 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.455 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.455 21:08:44 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.455 21:08:44 -- setup/common.sh@32 -- # continue 00:03:29.455 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.455 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.455 21:08:44 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.455 21:08:44 -- setup/common.sh@32 -- # continue 00:03:29.455 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.455 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.455 21:08:44 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.455 21:08:44 -- setup/common.sh@32 -- # continue 00:03:29.455 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.455 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.455 21:08:44 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.455 21:08:44 -- setup/common.sh@32 -- # continue 00:03:29.455 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.455 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.455 21:08:44 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.455 21:08:44 -- setup/common.sh@32 -- # continue 00:03:29.455 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.455 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.455 21:08:44 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.455 21:08:44 -- setup/common.sh@32 -- # continue 00:03:29.455 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.455 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.455 21:08:44 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.455 21:08:44 -- setup/common.sh@32 -- # continue 00:03:29.455 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.455 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.455 21:08:44 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.455 21:08:44 -- setup/common.sh@32 -- # continue 00:03:29.455 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.455 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.455 21:08:44 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.455 21:08:44 -- setup/common.sh@32 -- # continue 00:03:29.455 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.455 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.455 21:08:44 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.455 21:08:44 -- setup/common.sh@32 -- # continue 00:03:29.455 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.455 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.455 21:08:44 -- setup/common.sh@32 -- # [[ Zswap == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.455 21:08:44 -- setup/common.sh@32 -- # continue 00:03:29.455 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.455 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.455 21:08:44 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.455 21:08:44 -- setup/common.sh@32 -- # continue 00:03:29.455 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.455 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.455 21:08:44 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.455 21:08:44 -- setup/common.sh@32 -- # continue 00:03:29.455 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.455 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.455 21:08:44 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.455 21:08:44 -- setup/common.sh@32 -- # continue 00:03:29.455 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.455 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.455 21:08:44 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.455 21:08:44 -- setup/common.sh@32 -- # continue 00:03:29.455 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.455 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.455 21:08:44 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.455 21:08:44 -- setup/common.sh@32 -- # continue 00:03:29.455 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.455 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.455 21:08:44 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.455 21:08:44 -- setup/common.sh@32 -- # continue 00:03:29.455 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.455 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.455 21:08:44 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.455 21:08:44 -- setup/common.sh@32 -- # continue 00:03:29.455 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.455 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.455 21:08:44 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.455 21:08:44 -- setup/common.sh@32 -- # continue 00:03:29.455 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.455 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.455 21:08:44 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.455 21:08:44 -- setup/common.sh@32 -- # continue 00:03:29.455 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.455 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.455 21:08:44 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.455 21:08:44 -- setup/common.sh@32 -- # continue 00:03:29.455 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.455 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.455 21:08:44 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.455 21:08:44 -- setup/common.sh@32 -- # continue 00:03:29.455 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.455 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.455 21:08:44 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.455 21:08:44 -- setup/common.sh@32 -- # continue 00:03:29.455 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.455 21:08:44 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:29.455 21:08:44 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.455 21:08:44 -- setup/common.sh@32 -- # continue 00:03:29.455 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.455 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.455 21:08:44 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.456 21:08:44 -- setup/common.sh@32 -- # continue 00:03:29.456 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.456 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.456 21:08:44 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.456 21:08:44 -- setup/common.sh@32 -- # continue 00:03:29.456 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.456 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.456 21:08:44 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.456 21:08:44 -- setup/common.sh@32 -- # continue 00:03:29.456 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.456 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.456 21:08:44 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.456 21:08:44 -- setup/common.sh@32 -- # continue 00:03:29.456 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.456 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.456 21:08:44 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.456 21:08:44 -- setup/common.sh@32 -- # continue 00:03:29.456 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.456 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.456 21:08:44 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.456 21:08:44 -- setup/common.sh@32 -- # continue 00:03:29.456 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.456 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.456 21:08:44 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.456 21:08:44 -- setup/common.sh@32 -- # continue 00:03:29.456 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.456 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.456 21:08:44 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.456 21:08:44 -- setup/common.sh@32 -- # continue 00:03:29.456 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.456 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.456 21:08:44 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.456 21:08:44 -- setup/common.sh@32 -- # continue 00:03:29.456 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.456 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.456 21:08:44 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.456 21:08:44 -- setup/common.sh@32 -- # continue 00:03:29.456 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.456 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.456 21:08:44 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.456 21:08:44 -- setup/common.sh@32 -- # continue 00:03:29.456 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.456 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.456 21:08:44 -- setup/common.sh@32 -- # [[ ShmemHugePages == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.456 21:08:44 -- setup/common.sh@32 -- # continue 00:03:29.456 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.456 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.456 21:08:44 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.456 21:08:44 -- setup/common.sh@32 -- # continue 00:03:29.456 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.456 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.456 21:08:44 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.456 21:08:44 -- setup/common.sh@32 -- # continue 00:03:29.456 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.456 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.456 21:08:44 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.456 21:08:44 -- setup/common.sh@32 -- # continue 00:03:29.456 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.456 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.456 21:08:44 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.456 21:08:44 -- setup/common.sh@32 -- # continue 00:03:29.456 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.456 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.456 21:08:44 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.456 21:08:44 -- setup/common.sh@32 -- # continue 00:03:29.456 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.456 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.456 21:08:44 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.456 21:08:44 -- setup/common.sh@32 -- # continue 00:03:29.456 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.456 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.456 21:08:44 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.456 21:08:44 -- setup/common.sh@33 -- # echo 1536 00:03:29.456 21:08:44 -- setup/common.sh@33 -- # return 0 00:03:29.456 21:08:44 -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:29.456 21:08:44 -- setup/hugepages.sh@112 -- # get_nodes 00:03:29.456 21:08:44 -- setup/hugepages.sh@27 -- # local node 00:03:29.456 21:08:44 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:29.456 21:08:44 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:29.456 21:08:44 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:29.456 21:08:44 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:29.456 21:08:44 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:29.456 21:08:44 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:29.456 21:08:44 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:29.456 21:08:44 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:29.456 21:08:44 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:29.456 21:08:44 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:29.456 21:08:44 -- setup/common.sh@18 -- # local node=0 00:03:29.456 21:08:44 -- setup/common.sh@19 -- # local var val 00:03:29.456 21:08:44 -- setup/common.sh@20 -- # local mem_f mem 00:03:29.456 21:08:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:29.456 21:08:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 
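The scan above is the whole of the get_meminfo helper at work: dump /proc/meminfo, split each line on ': ', and keep hitting `continue` until the requested key matches, at which point the value is echoed and the function returns, which is how HugePages_Total resolved to 1536 just now. The entries that follow repeat the same lookup against node 0's sysfs meminfo, whose lines carry a "Node 0 " prefix that is stripped before matching. A minimal sketch of that parsing pattern, assuming only the behavior visible in this trace (the function name below is illustrative, not the exact test/setup/common.sh source):

get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # Per-node statistics live under sysfs; fall back to the global file otherwise.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local line var val _
    while IFS= read -r line; do
        line=${line#"Node $node "}        # sysfs lines read "Node 0 MemTotal: ..."
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue  # the key-by-key scan seen in the trace
        echo "$val"                       # numeric value only (kB or page count)
        return 0
    done < "$mem_f"
    return 1
}

With the numbers above, get_meminfo_sketch HugePages_Total prints 1536 and get_meminfo_sketch HugePages_Rsvd prints 0, matching the echo 1536 and echo 0 returns traced here.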
00:03:29.456 21:08:44 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:29.456 21:08:44 -- setup/common.sh@28 -- # mapfile -t mem 00:03:29.456 21:08:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:29.456 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.456 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.456 21:08:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 131767928 kB' 'MemFree: 125387824 kB' 'MemUsed: 6380104 kB' 'SwapCached: 0 kB' 'Active: 3273148 kB' 'Inactive: 317048 kB' 'Active(anon): 2824740 kB' 'Inactive(anon): 0 kB' 'Active(file): 448408 kB' 'Inactive(file): 317048 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3431256 kB' 'Mapped: 143832 kB' 'AnonPages: 168120 kB' 'Shmem: 2665800 kB' 'KernelStack: 13560 kB' 'PageTables: 4764 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 255208 kB' 'Slab: 630480 kB' 'SReclaimable: 255208 kB' 'SUnreclaim: 375272 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:29.456 21:08:44 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.456 21:08:44 -- setup/common.sh@32 -- # continue 00:03:29.456 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.456 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.456 21:08:44 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.456 21:08:44 -- setup/common.sh@32 -- # continue 00:03:29.456 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.456 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.456 21:08:44 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.456 21:08:44 -- setup/common.sh@32 -- # continue 00:03:29.456 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.456 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.456 21:08:44 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.456 21:08:44 -- setup/common.sh@32 -- # continue 00:03:29.456 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.456 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.456 21:08:44 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.456 21:08:44 -- setup/common.sh@32 -- # continue 00:03:29.456 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.456 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.456 21:08:44 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.456 21:08:44 -- setup/common.sh@32 -- # continue 00:03:29.456 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.456 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.456 21:08:44 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.456 21:08:44 -- setup/common.sh@32 -- # continue 00:03:29.456 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.456 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.456 21:08:44 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.456 21:08:44 -- setup/common.sh@32 -- # continue 00:03:29.456 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.456 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.456 21:08:44 -- setup/common.sh@32 -- # [[ Active(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.456 21:08:44 -- setup/common.sh@32 -- # continue 00:03:29.456 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.456 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.456 21:08:44 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.456 21:08:44 -- setup/common.sh@32 -- # continue 00:03:29.456 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.456 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.456 21:08:44 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.456 21:08:44 -- setup/common.sh@32 -- # continue 00:03:29.456 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.456 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.456 21:08:44 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.456 21:08:44 -- setup/common.sh@32 -- # continue 00:03:29.456 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.456 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.456 21:08:44 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.456 21:08:44 -- setup/common.sh@32 -- # continue 00:03:29.456 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.456 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.457 21:08:44 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.457 21:08:44 -- setup/common.sh@32 -- # continue 00:03:29.457 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.457 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.457 21:08:44 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.457 21:08:44 -- setup/common.sh@32 -- # continue 00:03:29.457 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.457 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.457 21:08:44 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.457 21:08:44 -- setup/common.sh@32 -- # continue 00:03:29.457 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.457 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.457 21:08:44 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.457 21:08:44 -- setup/common.sh@32 -- # continue 00:03:29.457 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.457 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.457 21:08:44 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.457 21:08:44 -- setup/common.sh@32 -- # continue 00:03:29.457 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.457 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.457 21:08:44 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.457 21:08:44 -- setup/common.sh@32 -- # continue 00:03:29.457 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.457 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.457 21:08:44 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.457 21:08:44 -- setup/common.sh@32 -- # continue 00:03:29.457 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.457 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.457 21:08:44 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.457 21:08:44 -- setup/common.sh@32 -- # continue 00:03:29.457 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.457 21:08:44 -- setup/common.sh@31 
-- # read -r var val _ 00:03:29.457 21:08:44 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.457 21:08:44 -- setup/common.sh@32 -- # continue 00:03:29.457 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.457 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.457 21:08:44 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.457 21:08:44 -- setup/common.sh@32 -- # continue 00:03:29.457 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.457 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.457 21:08:44 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.457 21:08:44 -- setup/common.sh@32 -- # continue 00:03:29.457 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.457 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.457 21:08:44 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.457 21:08:44 -- setup/common.sh@32 -- # continue 00:03:29.457 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.457 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.457 21:08:44 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.457 21:08:44 -- setup/common.sh@32 -- # continue 00:03:29.457 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.457 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.457 21:08:44 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.457 21:08:44 -- setup/common.sh@32 -- # continue 00:03:29.457 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.457 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.457 21:08:44 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.457 21:08:44 -- setup/common.sh@32 -- # continue 00:03:29.457 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.457 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.457 21:08:44 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.457 21:08:44 -- setup/common.sh@32 -- # continue 00:03:29.457 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.457 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.457 21:08:44 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.457 21:08:44 -- setup/common.sh@32 -- # continue 00:03:29.457 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.457 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.457 21:08:44 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.457 21:08:44 -- setup/common.sh@32 -- # continue 00:03:29.457 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.457 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.457 21:08:44 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.457 21:08:44 -- setup/common.sh@32 -- # continue 00:03:29.457 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.457 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.457 21:08:44 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.457 21:08:44 -- setup/common.sh@32 -- # continue 00:03:29.457 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.457 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.457 21:08:44 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.457 21:08:44 -- setup/common.sh@32 -- 
# continue 00:03:29.457 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.457 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.457 21:08:44 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.457 21:08:44 -- setup/common.sh@32 -- # continue 00:03:29.457 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.457 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.457 21:08:44 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.457 21:08:44 -- setup/common.sh@32 -- # continue 00:03:29.457 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.457 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.457 21:08:44 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.457 21:08:44 -- setup/common.sh@33 -- # echo 0 00:03:29.457 21:08:44 -- setup/common.sh@33 -- # return 0 00:03:29.457 21:08:44 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:29.457 21:08:44 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:29.457 21:08:44 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:29.457 21:08:44 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:29.457 21:08:44 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:29.457 21:08:44 -- setup/common.sh@18 -- # local node=1 00:03:29.457 21:08:44 -- setup/common.sh@19 -- # local var val 00:03:29.457 21:08:44 -- setup/common.sh@20 -- # local mem_f mem 00:03:29.457 21:08:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:29.457 21:08:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:29.457 21:08:44 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:29.457 21:08:44 -- setup/common.sh@28 -- # mapfile -t mem 00:03:29.457 21:08:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:29.457 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.457 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.457 21:08:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 131950252 kB' 'MemFree: 122499372 kB' 'MemUsed: 9450880 kB' 'SwapCached: 0 kB' 'Active: 5373600 kB' 'Inactive: 122472 kB' 'Active(anon): 5250344 kB' 'Inactive(anon): 0 kB' 'Active(file): 123256 kB' 'Inactive(file): 122472 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4989384 kB' 'Mapped: 60220 kB' 'AnonPages: 506704 kB' 'Shmem: 4743656 kB' 'KernelStack: 11976 kB' 'PageTables: 4792 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 278000 kB' 'Slab: 589376 kB' 'SReclaimable: 278000 kB' 'SUnreclaim: 311376 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:29.457 21:08:44 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.457 21:08:44 -- setup/common.sh@32 -- # continue 00:03:29.457 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.457 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.457 21:08:44 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.457 21:08:44 -- setup/common.sh@32 -- # continue 00:03:29.457 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.457 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.457 21:08:44 -- setup/common.sh@32 -- # [[ MemUsed == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.457 21:08:44 -- setup/common.sh@32 -- # continue 00:03:29.457 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.457 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.457 21:08:44 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.457 21:08:44 -- setup/common.sh@32 -- # continue 00:03:29.457 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.457 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.457 21:08:44 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.457 21:08:44 -- setup/common.sh@32 -- # continue 00:03:29.457 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.457 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.457 21:08:44 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.457 21:08:44 -- setup/common.sh@32 -- # continue 00:03:29.457 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.457 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.457 21:08:44 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.457 21:08:44 -- setup/common.sh@32 -- # continue 00:03:29.457 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.457 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.457 21:08:44 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.457 21:08:44 -- setup/common.sh@32 -- # continue 00:03:29.457 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.457 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.457 21:08:44 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.457 21:08:44 -- setup/common.sh@32 -- # continue 00:03:29.457 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.457 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.457 21:08:44 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.457 21:08:44 -- setup/common.sh@32 -- # continue 00:03:29.457 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.457 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.457 21:08:44 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.457 21:08:44 -- setup/common.sh@32 -- # continue 00:03:29.458 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.458 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.458 21:08:44 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.458 21:08:44 -- setup/common.sh@32 -- # continue 00:03:29.458 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.458 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.458 21:08:44 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.458 21:08:44 -- setup/common.sh@32 -- # continue 00:03:29.458 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.458 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.458 21:08:44 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.458 21:08:44 -- setup/common.sh@32 -- # continue 00:03:29.458 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.458 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.458 21:08:44 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.458 21:08:44 -- setup/common.sh@32 -- # continue 00:03:29.458 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.458 21:08:44 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:29.458 21:08:44 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.458 21:08:44 -- setup/common.sh@32 -- # continue 00:03:29.458 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.458 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.458 21:08:44 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.458 21:08:44 -- setup/common.sh@32 -- # continue 00:03:29.458 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.458 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.458 21:08:44 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.458 21:08:44 -- setup/common.sh@32 -- # continue 00:03:29.458 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.458 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.458 21:08:44 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.458 21:08:44 -- setup/common.sh@32 -- # continue 00:03:29.458 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.458 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.458 21:08:44 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.458 21:08:44 -- setup/common.sh@32 -- # continue 00:03:29.458 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.458 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.458 21:08:44 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.458 21:08:44 -- setup/common.sh@32 -- # continue 00:03:29.458 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.458 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.458 21:08:44 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.458 21:08:44 -- setup/common.sh@32 -- # continue 00:03:29.458 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.458 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.458 21:08:44 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.458 21:08:44 -- setup/common.sh@32 -- # continue 00:03:29.458 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.458 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.458 21:08:44 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.458 21:08:44 -- setup/common.sh@32 -- # continue 00:03:29.458 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.458 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.458 21:08:44 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.458 21:08:44 -- setup/common.sh@32 -- # continue 00:03:29.458 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.458 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.458 21:08:44 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.458 21:08:44 -- setup/common.sh@32 -- # continue 00:03:29.458 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.458 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.458 21:08:44 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.458 21:08:44 -- setup/common.sh@32 -- # continue 00:03:29.458 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.458 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.458 21:08:44 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.458 21:08:44 -- setup/common.sh@32 -- # 
continue 00:03:29.458 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.458 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.458 21:08:44 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.458 21:08:44 -- setup/common.sh@32 -- # continue 00:03:29.458 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.458 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.458 21:08:44 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.458 21:08:44 -- setup/common.sh@32 -- # continue 00:03:29.458 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.458 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.458 21:08:44 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.458 21:08:44 -- setup/common.sh@32 -- # continue 00:03:29.458 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.458 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.458 21:08:44 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.458 21:08:44 -- setup/common.sh@32 -- # continue 00:03:29.458 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.458 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.458 21:08:44 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.458 21:08:44 -- setup/common.sh@32 -- # continue 00:03:29.458 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.458 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.458 21:08:44 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.458 21:08:44 -- setup/common.sh@32 -- # continue 00:03:29.458 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.458 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.458 21:08:44 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.458 21:08:44 -- setup/common.sh@32 -- # continue 00:03:29.458 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.458 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.458 21:08:44 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.458 21:08:44 -- setup/common.sh@32 -- # continue 00:03:29.458 21:08:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.458 21:08:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.458 21:08:44 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.458 21:08:44 -- setup/common.sh@33 -- # echo 0 00:03:29.458 21:08:44 -- setup/common.sh@33 -- # return 0 00:03:29.458 21:08:44 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:29.458 21:08:44 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:29.458 21:08:44 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:29.458 21:08:44 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:29.458 21:08:44 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:29.458 node0=512 expecting 512 00:03:29.458 21:08:44 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:29.458 21:08:44 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:29.458 21:08:44 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:29.458 21:08:44 -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:03:29.458 node1=1024 expecting 1024 00:03:29.458 21:08:44 -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:03:29.458 
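That closes the books on custom_alloc: the global pool reports 1536 hugepages with zero surplus, reserved, and anonymous pages, and the per-node split is 512 on node 0 plus 1024 on node 1, exactly what the test requested (the escaped \5\1\2\,\1\0\2\4 in the last entry is xtrace's rendering of the expected "512,1024" pattern). Condensed into a standalone sketch under the same assumptions, with hypothetical variable names standing in for the setup/hugepages.sh@110-130 bookkeeping traced above:

declare -A nodes_test
for node_dir in /sys/devices/system/node/node[0-9]*; do
    n=${node_dir##*node}
    # Per-node pool size and surplus come from the sysfs meminfo dumps above;
    # the trace also folds in the reserved count, which is 0 in this run.
    total=$(awk '/HugePages_Total:/ {print $NF}' "$node_dir/meminfo")
    surp=$(awk '/HugePages_Surp:/ {print $NF}' "$node_dir/meminfo")
    nodes_test[$n]=$(( total + surp ))
done
[[ ${nodes_test[0]},${nodes_test[1]} == 512,1024 ]] && echo 'custom_alloc: 512/1024 split verified'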
00:03:29.458 real 0m3.387s
00:03:29.458 user 0m1.167s
00:03:29.458 sys 0m2.080s
00:03:29.458 21:08:44 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:03:29.458 21:08:44 -- common/autotest_common.sh@10 -- # set +x
00:03:29.458 ************************************
00:03:29.458 END TEST custom_alloc
00:03:29.458 ************************************
00:03:29.458 21:08:44 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:03:29.458 21:08:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:03:29.458 21:08:44 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:03:29.458 21:08:44 -- common/autotest_common.sh@10 -- # set +x
00:03:29.459 ************************************
00:03:29.459 START TEST no_shrink_alloc
00:03:29.459 ************************************
00:03:29.459 21:08:44 -- common/autotest_common.sh@1111 -- # no_shrink_alloc
00:03:29.459 21:08:44 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:03:29.459 21:08:44 -- setup/hugepages.sh@49 -- # local size=2097152
00:03:29.459 21:08:44 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:03:29.459 21:08:44 -- setup/hugepages.sh@51 -- # shift
00:03:29.459 21:08:44 -- setup/hugepages.sh@52 -- # node_ids=('0')
00:03:29.459 21:08:44 -- setup/hugepages.sh@52 -- # local node_ids
00:03:29.459 21:08:44 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:29.459 21:08:44 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:29.459 21:08:44 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:03:29.459 21:08:44 -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:03:29.459 21:08:44 -- setup/hugepages.sh@62 -- # local user_nodes
00:03:29.459 21:08:44 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:29.459 21:08:44 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:29.459 21:08:44 -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:29.459 21:08:44 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:29.459 21:08:44 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:03:29.459 21:08:44 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:29.459 21:08:44 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:03:29.459 21:08:44 -- setup/hugepages.sh@73 -- # return 0
00:03:29.459 21:08:44 -- setup/hugepages.sh@198 -- # setup output
00:03:29.459 21:08:44 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:29.459 21:08:44 -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh
00:03:32.002 0000:74:02.0 (8086 0cfe): Already using the vfio-pci driver
00:03:32.002 0000:c9:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:32.002 0000:f1:02.0 (8086 0cfe): Already using the vfio-pci driver
00:03:32.002 0000:79:02.0 (8086 0cfe): Already using the vfio-pci driver
00:03:32.002 0000:cb:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:32.002 0000:6f:01.0 (8086 0b25): Already using the vfio-pci driver
00:03:32.002 0000:6f:02.0 (8086 0cfe): Already using the vfio-pci driver
00:03:32.002 0000:f6:01.0 (8086 0b25): Already using the vfio-pci driver
00:03:32.002 0000:f6:02.0 (8086 0cfe): Already using the vfio-pci driver
00:03:32.002 0000:74:01.0 (8086 0b25): Already using the vfio-pci driver
00:03:32.002 0000:6a:02.0 (8086 0cfe): Already using the vfio-pci driver
00:03:32.002 0000:79:01.0 (8086 0b25): Already using the vfio-pci driver
00:03:32.002 0000:ec:01.0 (8086 0b25): Already using the vfio-pci driver
00:03:32.002 0000:6a:01.0 (8086 0b25): Already using the vfio-pci driver
00:03:32.002 0000:ec:02.0 (8086
0cfe): Already using the vfio-pci driver 00:03:32.002 0000:ca:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:32.002 0000:e7:01.0 (8086 0b25): Already using the vfio-pci driver 00:03:32.002 0000:e7:02.0 (8086 0cfe): Already using the vfio-pci driver 00:03:32.002 0000:f1:01.0 (8086 0b25): Already using the vfio-pci driver 00:03:32.264 21:08:47 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:03:32.264 21:08:47 -- setup/hugepages.sh@89 -- # local node 00:03:32.264 21:08:47 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:32.264 21:08:47 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:32.264 21:08:47 -- setup/hugepages.sh@92 -- # local surp 00:03:32.264 21:08:47 -- setup/hugepages.sh@93 -- # local resv 00:03:32.264 21:08:47 -- setup/hugepages.sh@94 -- # local anon 00:03:32.264 21:08:47 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:32.264 21:08:47 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:32.264 21:08:47 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:32.264 21:08:47 -- setup/common.sh@18 -- # local node= 00:03:32.264 21:08:47 -- setup/common.sh@19 -- # local var val 00:03:32.264 21:08:47 -- setup/common.sh@20 -- # local mem_f mem 00:03:32.264 21:08:47 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:32.264 21:08:47 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:32.264 21:08:47 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:32.264 21:08:47 -- setup/common.sh@28 -- # mapfile -t mem 00:03:32.264 21:08:47 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:32.264 21:08:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.264 21:08:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.264 21:08:47 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 263718180 kB' 'MemFree: 248910812 kB' 'MemAvailable: 248694272 kB' 'Buffers: 1308 kB' 'Cached: 8419424 kB' 'SwapCached: 0 kB' 'Active: 8646936 kB' 'Inactive: 439520 kB' 'Active(anon): 8075272 kB' 'Inactive(anon): 0 kB' 'Active(file): 571664 kB' 'Inactive(file): 439520 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 674452 kB' 'Mapped: 204180 kB' 'Shmem: 7409548 kB' 'KReclaimable: 533208 kB' 'Slab: 1220548 kB' 'SReclaimable: 533208 kB' 'SUnreclaim: 687340 kB' 'KernelStack: 25520 kB' 'PageTables: 9420 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 139199116 kB' 'Committed_AS: 9694880 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 331004 kB' 'VmallocChunk: 0 kB' 'Percpu: 163840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3289152 kB' 'DirectMap2M: 22702080 kB' 'DirectMap1G: 244318208 kB' 00:03:32.264 21:08:47 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.264 21:08:47 -- setup/common.sh@32 -- # continue 00:03:32.264 21:08:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.264 21:08:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.264 21:08:47 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.264 21:08:47 -- setup/common.sh@32 -- # continue 00:03:32.264 21:08:47 -- setup/common.sh@31 -- # IFS=': ' 
00:03:32.264 21:08:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.264 21:08:47 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.264 21:08:47 -- setup/common.sh@32 -- # continue 00:03:32.264 21:08:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.264 21:08:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.264 21:08:47 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.264 21:08:47 -- setup/common.sh@32 -- # continue 00:03:32.264 21:08:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.264 21:08:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.264 21:08:47 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.264 21:08:47 -- setup/common.sh@32 -- # continue 00:03:32.264 21:08:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.264 21:08:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.264 21:08:47 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.264 21:08:47 -- setup/common.sh@32 -- # continue 00:03:32.264 21:08:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.264 21:08:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.264 21:08:47 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.264 21:08:47 -- setup/common.sh@32 -- # continue 00:03:32.264 21:08:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.264 21:08:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.264 21:08:47 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.264 21:08:47 -- setup/common.sh@32 -- # continue 00:03:32.264 21:08:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.264 21:08:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.264 21:08:47 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.264 21:08:47 -- setup/common.sh@32 -- # continue 00:03:32.264 21:08:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.264 21:08:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.264 21:08:47 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.264 21:08:47 -- setup/common.sh@32 -- # continue 00:03:32.264 21:08:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.264 21:08:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.264 21:08:47 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.264 21:08:47 -- setup/common.sh@32 -- # continue 00:03:32.264 21:08:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.264 21:08:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.264 21:08:47 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.264 21:08:47 -- setup/common.sh@32 -- # continue 00:03:32.264 21:08:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.264 21:08:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.264 21:08:47 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.264 21:08:47 -- setup/common.sh@32 -- # continue 00:03:32.264 21:08:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.264 21:08:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.264 21:08:47 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.264 21:08:47 -- setup/common.sh@32 -- # continue 00:03:32.264 21:08:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.264 21:08:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.264 21:08:47 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.264 21:08:47 -- setup/common.sh@32 
-- # continue 00:03:32.264 21:08:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.264 21:08:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.264 21:08:47 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.264 21:08:47 -- setup/common.sh@32 -- # continue 00:03:32.264 21:08:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.264 21:08:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.264 21:08:47 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.264 21:08:47 -- setup/common.sh@32 -- # continue 00:03:32.264 21:08:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.264 21:08:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.264 21:08:47 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.264 21:08:47 -- setup/common.sh@32 -- # continue 00:03:32.264 21:08:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.264 21:08:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.264 21:08:47 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.264 21:08:47 -- setup/common.sh@32 -- # continue 00:03:32.264 21:08:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.264 21:08:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.264 21:08:47 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.265 21:08:47 -- setup/common.sh@32 -- # continue 00:03:32.265 21:08:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.265 21:08:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.265 21:08:47 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.265 21:08:47 -- setup/common.sh@32 -- # continue 00:03:32.265 21:08:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.265 21:08:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.265 21:08:47 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.265 21:08:47 -- setup/common.sh@32 -- # continue 00:03:32.265 21:08:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.265 21:08:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.265 21:08:47 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.265 21:08:47 -- setup/common.sh@32 -- # continue 00:03:32.265 21:08:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.265 21:08:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.265 21:08:47 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.265 21:08:47 -- setup/common.sh@32 -- # continue 00:03:32.265 21:08:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.265 21:08:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.265 21:08:47 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.265 21:08:47 -- setup/common.sh@32 -- # continue 00:03:32.265 21:08:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.265 21:08:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.265 21:08:47 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.265 21:08:47 -- setup/common.sh@32 -- # continue 00:03:32.265 21:08:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.265 21:08:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.265 21:08:47 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.265 21:08:47 -- setup/common.sh@32 -- # continue 00:03:32.265 21:08:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.265 21:08:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.265 21:08:47 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s 
]] 00:03:32.265 21:08:47 -- setup/common.sh@32 -- # continue 00:03:32.265 21:08:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.265 21:08:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.265 21:08:47 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.265 21:08:47 -- setup/common.sh@32 -- # continue 00:03:32.265 21:08:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.265 21:08:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.265 21:08:47 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.265 21:08:47 -- setup/common.sh@32 -- # continue 00:03:32.265 21:08:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.265 21:08:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.265 21:08:47 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.265 21:08:47 -- setup/common.sh@32 -- # continue 00:03:32.265 21:08:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.265 21:08:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.265 21:08:47 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.265 21:08:47 -- setup/common.sh@32 -- # continue 00:03:32.265 21:08:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.265 21:08:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.265 21:08:47 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.265 21:08:47 -- setup/common.sh@32 -- # continue 00:03:32.265 21:08:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.265 21:08:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.265 21:08:47 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.265 21:08:47 -- setup/common.sh@32 -- # continue 00:03:32.265 21:08:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.265 21:08:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.265 21:08:47 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.265 21:08:47 -- setup/common.sh@32 -- # continue 00:03:32.265 21:08:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.265 21:08:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.265 21:08:47 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.265 21:08:47 -- setup/common.sh@32 -- # continue 00:03:32.265 21:08:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.265 21:08:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.265 21:08:47 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.265 21:08:47 -- setup/common.sh@32 -- # continue 00:03:32.265 21:08:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.265 21:08:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.265 21:08:47 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.265 21:08:47 -- setup/common.sh@32 -- # continue 00:03:32.265 21:08:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.265 21:08:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.265 21:08:47 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.265 21:08:47 -- setup/common.sh@32 -- # continue 00:03:32.265 21:08:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.265 21:08:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.265 21:08:47 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.265 21:08:47 -- setup/common.sh@32 -- # continue 00:03:32.265 21:08:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.265 21:08:47 -- setup/common.sh@31 -- # read -r var val _ 
00:03:32.265 21:08:47 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.265 21:08:47 -- setup/common.sh@33 -- # echo 0 00:03:32.265 21:08:47 -- setup/common.sh@33 -- # return 0 00:03:32.265 21:08:47 -- setup/hugepages.sh@97 -- # anon=0
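The loop being traced here is easier to read outside the xtrace noise. What each get_meminfo call does, as the trace shows it: pick /proc/meminfo (or a per-node meminfo file when a node number is passed), mapfile it, strip the "Node N " prefix that per-node files add, then split each line on ': ' and return the value of the requested key. A minimal standalone sketch of that pattern — an illustration, not the SPDK helper itself:

```bash
#!/usr/bin/env bash
shopt -s extglob

# Sketch of the lookup pattern the trace shows (illustrative, not the
# SPDK script): read one field from /proc/meminfo, or from a per-node
# meminfo file when a node number is passed.
get_meminfo() {
	local get=$1 node=${2:-} var val _ mem_f line
	local -a mem
	mem_f=/proc/meminfo
	# With no node argument this tests /sys/devices/system/node/node/meminfo,
	# which does not exist -- exactly what the trace shows for node=.
	[[ -e /sys/devices/system/node/node$node/meminfo ]] &&
		mem_f=/sys/devices/system/node/node$node/meminfo
	mapfile -t mem <"$mem_f"
	# Per-node files prefix every line with "Node N "; strip it (extglob).
	mem=("${mem[@]#Node +([0-9]) }")
	for line in "${mem[@]}"; do
		IFS=': ' read -r var val _ <<<"$line"
		[[ $var == "$get" ]] || continue  # the per-key scan seen in the trace
		echo "${val:-0}"
		return 0
	done
	return 1
}

get_meminfo HugePages_Total   # 1024 in this log
get_meminfo HugePages_Surp 0  # same field, node 0 only
```

Every "continue" step in the trace is one iteration of that inner loop; the "echo 0 / return 0" pair is the match firing.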
00:03:32.265 21:08:47 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:32.265 21:08:47 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:32.265 21:08:47 -- setup/common.sh@18 -- # local node= 00:03:32.265 21:08:47 -- setup/common.sh@19 -- # local var val 00:03:32.265 21:08:47 -- setup/common.sh@20 -- # local mem_f mem 00:03:32.265 21:08:47 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:32.265 21:08:47 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:32.265 21:08:47 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:32.265 21:08:47 -- setup/common.sh@28 -- # mapfile -t mem 00:03:32.265 21:08:47 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:32.265 21:08:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.265 21:08:47 -- setup/common.sh@31 -- # read -r var val _
00:03:32.265 21:08:47 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 263718180 kB' 'MemFree: 248914336 kB' 'MemAvailable: 248697796 kB' 'Buffers: 1308 kB' 'Cached: 8419424 kB' 'SwapCached: 0 kB' 'Active: 8648232 kB' 'Inactive: 439520 kB' 'Active(anon): 8076568 kB' 'Inactive(anon): 0 kB' 'Active(file): 571664 kB' 'Inactive(file): 439520 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 675940 kB' 'Mapped: 204180 kB' 'Shmem: 7409548 kB' 'KReclaimable: 533208 kB' 'Slab: 1220528 kB' 'SReclaimable: 533208 kB' 'SUnreclaim: 687320 kB' 'KernelStack: 25552 kB' 'PageTables: 9536 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 139199116 kB' 'Committed_AS: 9698300 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 330988 kB' 'VmallocChunk: 0 kB' 'Percpu: 163840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3289152 kB' 'DirectMap2M: 22702080 kB' 'DirectMap1G: 244318208 kB'
[xtrace scan elided: each key in the snapshot above fails the HugePages_Surp match and is skipped with continue until HugePages_Surp itself matches]
00:03:32.267 21:08:47 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.267 21:08:47 -- setup/common.sh@33 -- # echo 0 00:03:32.267 21:08:47 -- setup/common.sh@33 -- # return 0 00:03:32.267 21:08:47 -- setup/hugepages.sh@99 -- # surp=0
00:03:32.267 21:08:47 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd [same get_meminfo preamble as above, node=] 00:03:32.267 21:08:47 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 263718180 kB' 'MemFree: 248911084 kB' 'MemAvailable: 248694544 kB' 'Buffers: 1308 kB' 'Cached: 8419424 kB' 'SwapCached: 0 kB' 'Active: 8652020 kB' 'Inactive: 439520 kB' 'Active(anon): 8080356 kB' 'Inactive(anon): 0 kB' 'Active(file): 571664 kB' 'Inactive(file): 439520 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 679848 kB' 'Mapped: 204668 kB' 'Shmem: 7409548 kB' 'KReclaimable: 533208 kB' 'Slab: 1220528 kB' 'SReclaimable: 533208 kB' 'SUnreclaim: 687320 kB' 'KernelStack: 25536 kB' 'PageTables: 9516 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 139199116 kB' 'Committed_AS: 9703064 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 331020 kB' 'VmallocChunk: 0 kB' 'Percpu: 163840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3289152 kB' 'DirectMap2M: 22702080 kB' 'DirectMap1G: 244318208 kB'
[xtrace scan elided: each key fails the HugePages_Rsvd match and is skipped with continue until HugePages_Rsvd itself matches]
00:03:32.529 21:08:47 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.529 21:08:47 -- setup/common.sh@33 -- # echo 0 00:03:32.529 21:08:47 -- setup/common.sh@33 -- # return 0 00:03:32.529 21:08:47 -- setup/hugepages.sh@100 -- # resv=0
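A note on reading this trace before the next lookups: the backslash-riddled strings such as \H\u\g\e\P\a\g\e\s\_\S\u\r\p are not in the script source. Under `set -x`, bash escapes every character of the expanded, unquoted right-hand side of `[[ ... == ... ]]`, because an unquoted right-hand side is a pattern. A tiny reproduction with illustrative values:

```bash
#!/usr/bin/env bash
# Illustration only: where the \H\u\g\e\P\a\g\e\s\_... strings come from.
set -x
get=HugePages_Surp
for var in MemTotal MemFree HugePages_Surp; do
	# xtrace prints e.g.: [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
	[[ $var == $get ]] && echo "matched $var"
done
```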
00:03:32.529 21:08:47 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:32.529 nr_hugepages=1024 00:03:32.529 21:08:47 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:32.529 resv_hugepages=0 00:03:32.529 21:08:47 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:32.529 surplus_hugepages=0 00:03:32.529 21:08:47 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:32.529 anon_hugepages=0 00:03:32.529 21:08:47 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:32.529 21:08:47 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:32.529 21:08:47 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total [same get_meminfo preamble as above, node=]
00:03:32.530 21:08:47 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 263718180 kB' 'MemFree: 248907116 kB' 'MemAvailable: 248690576 kB' 'Buffers: 1308 kB' 'Cached: 8419452 kB' 'SwapCached: 0 kB' 'Active: 8653416 kB' 'Inactive: 439520 kB' 'Active(anon): 8081752 kB' 'Inactive(anon): 0 kB' 'Active(file): 571664 kB' 'Inactive(file): 439520 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 681660 kB' 'Mapped: 204852 kB' 'Shmem: 7409576 kB' 'KReclaimable: 533208 kB' 'Slab: 1220500 kB' 'SReclaimable: 533208 kB' 'SUnreclaim: 687292 kB' 'KernelStack: 25456 kB' 'PageTables: 9272 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 139199116 kB' 'Committed_AS: 9703940 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 330944 kB' 'VmallocChunk: 0 kB' 'Percpu: 163840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3289152 kB' 'DirectMap2M: 22702080 kB' 'DirectMap1G: 244318208 kB'
[xtrace scan elided: each key fails the HugePages_Total match and is skipped with continue until HugePages_Total itself matches]
00:03:32.531 21:08:47 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.531 21:08:47 -- setup/common.sh@33 -- # echo 1024 00:03:32.531 21:08:47 -- setup/common.sh@33 -- # return 0 00:03:32.531 21:08:47 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
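That arithmetic guard is the heart of the verification: the configured page count plus surplus plus reserved pages must account for HugePages_Total (here 1024 == 1024 + 0 + 0). A standalone restatement of the same check; read_field and the hard-coded nr_hugepages=1024 are illustrative, not SPDK code:

```bash
#!/usr/bin/env bash
# Restates the invariant the trace asserts: requested + surplus + reserved
# pages must account for the whole pool. nr_hugepages=1024 mirrors this log;
# the field names are real /proc/meminfo keys.
nr_hugepages=1024
read_field() { awk -v k="$1:" '$1 == k {print $2}' /proc/meminfo; }
total=$(read_field HugePages_Total)
surp=$(read_field HugePages_Surp)
resv=$(read_field HugePages_Rsvd)
if (( total == nr_hugepages + surp + resv )); then
	echo "hugepage accounting consistent ($total total)"
else
	echo "mismatch: total=$total nr=$nr_hugepages surp=$surp resv=$resv" >&2
fi
```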
00:03:32.531 21:08:47 -- setup/common.sh@31 -- # IFS=': '
00:03:32.531 21:08:47 -- setup/common.sh@31 -- # read -r var val _
[... xtrace condensed: setup/common.sh@31-@32 reads each remaining node0 meminfo field (MemUsed, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, Dirty, Writeback, FilePages, Mapped, AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, Unaccepted, HugePages_Total, HugePages_Free) and, as none matches HugePages_Surp, skips it with 'continue' ...]
00:03:32.532 21:08:47 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:32.532 21:08:47 -- setup/common.sh@33 -- # echo 0
00:03:32.532 21:08:47 -- setup/common.sh@33 -- # return 0
00:03:32.532 21:08:47 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:32.532 21:08:47 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:32.532 21:08:47 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:32.532 21:08:47 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:32.532 21:08:47 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:03:32.532 node0=1024 expecting 1024
00:03:32.532 21:08:47 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:32.532 21:08:47 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:03:32.532 21:08:47 -- setup/hugepages.sh@202 -- # NRHUGE=512
00:03:32.532 21:08:47 -- setup/hugepages.sh@202 -- # setup output
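The xtrace above is the harness's get_meminfo helper walking a meminfo snapshot one field at a time. A minimal sketch of the logic implied by the @17-@33 trace lines, as a reconstruction and approximation rather than the canonical SPDK setup/common.sh:

#!/usr/bin/env bash
shopt -s extglob   # for the +([0-9]) pattern used to strip "Node N " prefixes

# get_meminfo FIELD [NODE]: print FIELD's value from /proc/meminfo, or from
# the per-node meminfo file when a NUMA node number is given.
get_meminfo() {
    local get=$1 node=${2:-}
    local var val _
    local mem_f=/proc/meminfo mem
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")       # node files prefix lines with "Node N "
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # the repeated 'continue' seen above
        echo "$val"
        return 0
    done < <(printf '%s\n' "${mem[@]}")    # the @16 printf seen in the trace
    return 1
}

get_meminfo HugePages_Surp 0   # prints 0 for node0 in this run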
00:03:32.532 21:08:47 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:32.532 21:08:47 -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh
00:03:35.075 0000:c9:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:35.075 0000:74:02.0 (8086 0cfe): Already using the vfio-pci driver
00:03:35.075 0000:f1:02.0 (8086 0cfe): Already using the vfio-pci driver
00:03:35.075 0000:79:02.0 (8086 0cfe): Already using the vfio-pci driver
00:03:35.075 0000:cb:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:35.075 0000:6f:01.0 (8086 0b25): Already using the vfio-pci driver
00:03:35.075 0000:6f:02.0 (8086 0cfe): Already using the vfio-pci driver
00:03:35.075 0000:f6:01.0 (8086 0b25): Already using the vfio-pci driver
00:03:35.075 0000:f6:02.0 (8086 0cfe): Already using the vfio-pci driver
00:03:35.075 0000:74:01.0 (8086 0b25): Already using the vfio-pci driver
00:03:35.075 0000:6a:02.0 (8086 0cfe): Already using the vfio-pci driver
00:03:35.075 0000:79:01.0 (8086 0b25): Already using the vfio-pci driver
00:03:35.075 0000:ec:01.0 (8086 0b25): Already using the vfio-pci driver
00:03:35.075 0000:6a:01.0 (8086 0b25): Already using the vfio-pci driver
00:03:35.075 0000:ec:02.0 (8086 0cfe): Already using the vfio-pci driver
00:03:35.075 0000:ca:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:35.075 0000:e7:01.0 (8086 0b25): Already using the vfio-pci driver
00:03:35.075 0000:e7:02.0 (8086 0cfe): Already using the vfio-pci driver
00:03:35.075 0000:f1:01.0 (8086 0b25): Already using the vfio-pci driver
00:03:35.336 INFO: Requested 512 hugepages but 1024 already allocated on node0
00:03:35.336 21:08:50 -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:03:35.336 21:08:50 -- setup/hugepages.sh@89 -- # local node
00:03:35.336 21:08:50 -- setup/hugepages.sh@90 -- # local sorted_t
00:03:35.336 21:08:50 -- setup/hugepages.sh@91 -- # local sorted_s
00:03:35.336 21:08:50 -- setup/hugepages.sh@92 -- # local surp
00:03:35.336 21:08:50 -- setup/hugepages.sh@93 -- # local resv
00:03:35.336 21:08:50 -- setup/hugepages.sh@94 -- # local anon
00:03:35.336 21:08:50 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:35.336 21:08:50 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:35.336 21:08:50 -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:35.336 21:08:50 -- setup/common.sh@18 -- # local node=
00:03:35.336 21:08:50 -- setup/common.sh@19 -- # local var val
00:03:35.336 21:08:50 -- setup/common.sh@20 -- # local mem_f mem
00:03:35.336 21:08:50 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:35.336 21:08:50 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:35.336 21:08:50 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:35.336 21:08:50 -- setup/common.sh@28 -- # mapfile -t mem
00:03:35.336 21:08:50 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:35.336 21:08:50 -- setup/common.sh@31 -- # IFS=': '
00:03:35.336 21:08:50 -- setup/common.sh@31 -- # read -r var val _
00:03:35.336 21:08:50 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 263718180 kB' 'MemFree: 248921452 kB' 'MemAvailable: 248704880 kB' 'Buffers: 1308 kB' 'Cached: 8419548 kB' 'SwapCached: 0 kB' 'Active: 8651392 kB' 'Inactive: 439520 kB' 'Active(anon): 8079728 kB' 'Inactive(anon): 0 kB' 'Active(file): 571664 kB' 'Inactive(file): 439520 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 678584 kB' 'Mapped: 204232 kB' 'Shmem: 7409672 kB' 'KReclaimable: 533144 kB' 'Slab: 1219924 kB' 'SReclaimable: 533144 kB' 'SUnreclaim: 686780 kB' 'KernelStack: 25520 kB' 'PageTables: 9416 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 139199116 kB' 'Committed_AS: 9695644 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 331020 kB' 'VmallocChunk: 0 kB' 'Percpu: 163840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3289152 kB' 'DirectMap2M: 22702080 kB' 'DirectMap1G: 244318208 kB'
[... xtrace condensed: setup/common.sh@31-@32 reads each snapshot field (MemTotal through HardwareCorrupted) and, as none matches AnonHugePages, skips it with 'continue' ...]
00:03:35.337 21:08:50 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:35.337 21:08:50 -- setup/common.sh@33 -- # echo 0
00:03:35.337 21:08:50 -- setup/common.sh@33 -- # return 0
00:03:35.601 21:08:50 -- setup/hugepages.sh@97 -- # anon=0
00:03:35.601 21:08:50 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:35.601 21:08:50 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:35.601 21:08:50 -- setup/common.sh@18 -- # local node=
00:03:35.601 21:08:50 -- setup/common.sh@19 -- # local var val
00:03:35.601 21:08:50 -- setup/common.sh@20 -- # local mem_f mem
00:03:35.601 21:08:50 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:35.601 21:08:50 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:35.601 21:08:50 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:35.601 21:08:50 -- setup/common.sh@28 -- # mapfile -t mem
00:03:35.601 21:08:50 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:35.601 21:08:50 -- setup/common.sh@31 -- # IFS=': '
00:03:35.601 21:08:50 -- setup/common.sh@31 -- # read -r var val _
00:03:35.602 21:08:50 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 263718180 kB' 'MemFree: 248924216 kB' 'MemAvailable: 248707644 kB' 'Buffers: 1308 kB' 'Cached: 8419548 kB' 'SwapCached: 0 kB' 'Active: 8650940 kB' 'Inactive: 439520 kB' 'Active(anon): 8079276 kB' 'Inactive(anon): 0 kB' 'Active(file): 571664 kB' 'Inactive(file): 439520 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 678640 kB' 'Mapped: 204204 kB' 'Shmem: 7409672 kB' 'KReclaimable: 533144 kB' 'Slab: 1219904 kB' 'SReclaimable: 533144 kB' 'SUnreclaim: 686760 kB' 'KernelStack: 25488 kB' 'PageTables: 9300 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 139199116 kB' 'Committed_AS: 9695656 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 331004 kB' 'VmallocChunk: 0 kB' 'Percpu: 163840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3289152 kB' 'DirectMap2M: 22702080 kB' 'DirectMap1G: 244318208 kB'
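Each of these snapshots is a full /proc/meminfo dump that the helper then scans field by field. For spot-checking a log like this by hand, the four hugepage counters the scans keep returning can be pulled in one step; a standalone equivalent, not part of the SPDK scripts, with output values as in the snapshots above:

awk -F': +' '/^HugePages_(Total|Free|Rsvd|Surp)/ { print $1 "=" $2 }' /proc/meminfo
# HugePages_Total=1024
# HugePages_Free=1024
# HugePages_Rsvd=0
# HugePages_Surp=0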
[... xtrace condensed: setup/common.sh@31-@32 reads each snapshot field (MemTotal through HugePages_Rsvd) and, as none matches HugePages_Surp, skips it with 'continue' ...]
00:03:35.603 21:08:50 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:35.603 21:08:50 -- setup/common.sh@33 -- # echo 0
00:03:35.603 21:08:50 -- setup/common.sh@33 -- # return 0
00:03:35.603 21:08:50 -- setup/hugepages.sh@99 -- # surp=0
00:03:35.603 21:08:50 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:35.603 21:08:50 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:35.603 21:08:50 -- setup/common.sh@18 -- # local node=
00:03:35.603 21:08:50 -- setup/common.sh@19 -- # local var val
00:03:35.603 21:08:50 -- setup/common.sh@20 -- # local mem_f mem
00:03:35.603 21:08:50 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:35.603 21:08:50 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:35.603 21:08:50 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:35.603 21:08:50 -- setup/common.sh@28 -- # mapfile -t mem
00:03:35.603 21:08:50 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:35.603 21:08:50 -- setup/common.sh@31 -- # IFS=': '
00:03:35.603 21:08:50 -- setup/common.sh@31 -- # read -r var val _
00:03:35.603 21:08:50 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 263718180 kB' 'MemFree: 248924336 kB' 'MemAvailable: 248707764 kB' 'Buffers: 1308 kB' 'Cached: 8419560 kB' 'SwapCached: 0 kB' 'Active: 8650704 kB' 'Inactive: 439520 kB' 'Active(anon): 8079040 kB' 'Inactive(anon): 0 kB' 'Active(file): 571664 kB' 'Inactive(file): 439520 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 678872 kB' 'Mapped: 204100 kB' 'Shmem: 7409684 kB' 'KReclaimable: 533144 kB' 'Slab: 1219892 kB' 'SReclaimable: 533144 kB' 'SUnreclaim: 686748 kB' 'KernelStack: 25472 kB' 'PageTables: 9276 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 139199116 kB' 'Committed_AS: 9696828 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 331004 kB' 'VmallocChunk: 0 kB' 'Percpu: 163840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3289152 kB' 'DirectMap2M: 22702080 kB' 'DirectMap1G: 244318208 kB'
[... xtrace condensed: setup/common.sh@31-@32 reads each snapshot field (MemTotal through HugePages_Free) and, as none matches HugePages_Rsvd, skips it with 'continue' ...]
00:03:35.604 21:08:50 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:35.604 21:08:50 -- setup/common.sh@33 -- # echo 0
00:03:35.604 21:08:50 -- setup/common.sh@33 -- # return 0
00:03:35.604 21:08:50 -- setup/hugepages.sh@100 -- # resv=0
00:03:35.604 21:08:50 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:35.604 nr_hugepages=1024
00:03:35.604 21:08:50 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:35.604 resv_hugepages=0
00:03:35.604 21:08:50 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:35.604 surplus_hugepages=0
00:03:35.604 21:08:50 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:35.604 anon_hugepages=0
00:03:35.604 21:08:50 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:35.604 21:08:50 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
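Spelled out, the two arithmetic checks at hugepages.sh@107 and @109 assert that the 1024 pages the test expects account for the whole pool once surplus and reserved pages are netted out. A compact restatement, using the get_meminfo sketch shown earlier; the harness computes its nr_hugepages before this point, so approximating it with HugePages_Total (which this run shows equals 1024) is an assumption, and the variable names are illustrative:

# Mirrors the two assertions traced above (hugepages.sh@107 and @109).
nr_hugepages=$(get_meminfo HugePages_Total)   # 1024 in this run (assumed source)
surp=$(get_meminfo HugePages_Surp)            # 0
resv=$(get_meminfo HugePages_Rsvd)            # 0
(( 1024 == nr_hugepages + surp + resv )) || echo "hugepage accounting off" >&2
(( 1024 == nr_hugepages )) || echo "unexpected HugePages_Total" >&2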
00:03:35.604 21:08:50 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:35.604 21:08:50 -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:35.604 21:08:50 -- setup/common.sh@18 -- # local node=
00:03:35.604 21:08:50 -- setup/common.sh@19 -- # local var val
00:03:35.604 21:08:50 -- setup/common.sh@20 -- # local mem_f mem
00:03:35.604 21:08:50 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:35.604 21:08:50 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:35.604 21:08:50 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:35.604 21:08:50 -- setup/common.sh@28 -- # mapfile -t mem
00:03:35.604 21:08:50 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:35.604 21:08:50 -- setup/common.sh@31 -- # IFS=': '
00:03:35.604 21:08:50 -- setup/common.sh@31 -- # read -r var val _
00:03:35.605 21:08:50 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 263718180 kB' 'MemFree: 248925316 kB' 'MemAvailable: 248708744 kB' 'Buffers: 1308 kB' 'Cached: 8419560 kB' 'SwapCached: 0 kB' 'Active: 8650512 kB' 'Inactive: 439520 kB' 'Active(anon): 8078848 kB' 'Inactive(anon): 0 kB' 'Active(file): 571664 kB' 'Inactive(file): 439520 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 678696 kB' 'Mapped: 204100 kB' 'Shmem: 7409684 kB' 'KReclaimable: 533144 kB' 'Slab: 1219888 kB' 'SReclaimable: 533144 kB' 'SUnreclaim: 686744 kB' 'KernelStack: 25456 kB' 'PageTables: 9216 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 139199116 kB' 'Committed_AS: 9698368 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 331052 kB' 'VmallocChunk: 0 kB' 'Percpu: 163840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3289152 kB' 'DirectMap2M: 22702080 kB' 'DirectMap1G: 244318208 kB'
[... xtrace condensed: setup/common.sh@31-@32 reads each snapshot field in turn (MemTotal, MemFree, ..., ShmemHugePages, ShmemPmdMapped) and skips non-matches with 'continue'; the scan for HugePages_Total is still in progress where this excerpt ends ...]
setup/common.sh@32 -- # continue 00:03:35.606 21:08:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.606 21:08:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.606 21:08:50 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.606 21:08:50 -- setup/common.sh@32 -- # continue 00:03:35.606 21:08:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.606 21:08:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.606 21:08:50 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.606 21:08:50 -- setup/common.sh@32 -- # continue 00:03:35.606 21:08:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.606 21:08:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.606 21:08:50 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.606 21:08:50 -- setup/common.sh@32 -- # continue 00:03:35.606 21:08:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.606 21:08:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.606 21:08:50 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.606 21:08:50 -- setup/common.sh@32 -- # continue 00:03:35.606 21:08:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.606 21:08:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.606 21:08:50 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.606 21:08:50 -- setup/common.sh@32 -- # continue 00:03:35.606 21:08:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.606 21:08:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.606 21:08:50 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.606 21:08:50 -- setup/common.sh@33 -- # echo 1024 00:03:35.606 21:08:50 -- setup/common.sh@33 -- # return 0 00:03:35.606 21:08:50 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:35.606 21:08:50 -- setup/hugepages.sh@112 -- # get_nodes 00:03:35.606 21:08:50 -- setup/hugepages.sh@27 -- # local node 00:03:35.606 21:08:50 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:35.606 21:08:50 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:35.606 21:08:50 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:35.606 21:08:50 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:35.606 21:08:50 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:35.606 21:08:50 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:35.606 21:08:50 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:35.606 21:08:50 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:35.606 21:08:50 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:35.606 21:08:50 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:35.606 21:08:50 -- setup/common.sh@18 -- # local node=0 00:03:35.606 21:08:50 -- setup/common.sh@19 -- # local var val 00:03:35.606 21:08:50 -- setup/common.sh@20 -- # local mem_f mem 00:03:35.606 21:08:50 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:35.606 21:08:50 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:35.606 21:08:50 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:35.606 21:08:50 -- setup/common.sh@28 -- # mapfile -t mem 00:03:35.606 21:08:50 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:35.606 21:08:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.606 21:08:50 -- setup/common.sh@31 -- # read -r 
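The loop traced above is a plain field scan over a meminfo file. A minimal sketch of that lookup, mirroring the traced setup/common.sh logic but not the verbatim SPDK helper:

```bash
#!/usr/bin/env bash
shopt -s extglob

# get_meminfo FIELD [NODE]: print FIELD's value from /proc/meminfo, or
# from the per-NUMA-node copy when a node id is given.
get_meminfo() {
	local get=$1 node=${2:-}
	local mem_f=/proc/meminfo
	local -a mem
	local line var val _

	# Use the per-node copy when a node id is given.
	if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi

	mapfile -t mem < "$mem_f"
	# Per-node files prefix every line with "Node <n> "; strip it so both
	# layouts parse identically as "Field: value ...".
	mem=("${mem[@]#Node +([0-9]) }")

	for line in "${mem[@]}"; do
		IFS=': ' read -r var val _ <<< "$line"
		[[ $var == "$get" ]] && { echo "$val"; return 0; }
	done
	return 1
}

get_meminfo HugePages_Total    # -> 1024 on this node
get_meminfo HugePages_Surp 0   # -> 0 (node0 surplus pages)
```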
00:03:35.606 21:08:50 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 131767928 kB' 'MemFree: 124325292 kB' 'MemUsed: 7442636 kB' 'SwapCached: 0 kB' 'Active: 3275200 kB' 'Inactive: 317048 kB' 'Active(anon): 2826792 kB' 'Inactive(anon): 0 kB' 'Active(file): 448408 kB' 'Inactive(file): 317048 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3431368 kB' 'Mapped: 143884 kB' 'AnonPages: 170088 kB' 'Shmem: 2665912 kB' 'KernelStack: 13544 kB' 'PageTables: 4464 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 255144 kB' 'Slab: 630300 kB' 'SReclaimable: 255144 kB' 'SUnreclaim: 375156 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:03:35.606 21:08:50 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:35.606 21:08:50 -- setup/common.sh@32 -- # continue
00:03:35.606 21:08:50 -- setup/common.sh@31 -- # IFS=': '
00:03:35.606 21:08:50 -- setup/common.sh@31 -- # read -r var val _
[... the same compare/continue/read cycle repeats for every field of the node0 meminfo dump above until HugePages_Surp is reached ...]
00:03:35.607 21:08:50 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:35.607 21:08:50 -- setup/common.sh@33 -- # echo 0
00:03:35.607 21:08:50 -- setup/common.sh@33 -- # return 0
00:03:35.607 21:08:50 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:35.607 21:08:50 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:35.607 21:08:50 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:35.607 21:08:50 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:35.607 21:08:50 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:03:35.607 node0=1024 expecting 1024
00:03:35.607 21:08:50 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:35.607 
00:03:35.607 real	0m6.025s
00:03:35.607 user	0m2.049s
00:03:35.607 sys	0m3.655s
00:03:35.607 21:08:50 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:03:35.607 21:08:50 -- common/autotest_common.sh@10 -- # set +x
00:03:35.607 ************************************
00:03:35.607 END TEST no_shrink_alloc
00:03:35.607 ************************************
00:03:35.607 21:08:50 -- setup/hugepages.sh@217 -- # clear_hp
00:03:35.607 21:08:50 -- setup/hugepages.sh@37 -- # local node hp
00:03:35.607 21:08:50 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:03:35.607 21:08:50 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:35.607 21:08:50 -- setup/hugepages.sh@41 -- # echo 0
00:03:35.607 21:08:50 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:35.607 21:08:50 -- setup/hugepages.sh@41 -- # echo 0
00:03:35.607 21:08:50 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:03:35.607 21:08:50 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:35.607 21:08:50 -- setup/hugepages.sh@41 -- # echo 0
00:03:35.607 21:08:50 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:35.607 21:08:50 -- setup/hugepages.sh@41 -- # echo 0
00:03:35.607 21:08:50 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:03:35.607 21:08:50 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:03:35.607 
00:03:35.607 real	0m25.491s
00:03:35.607 user	0m7.996s
00:03:35.607 sys	0m14.235s
00:03:35.607 21:08:50 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:03:35.607 21:08:50 -- common/autotest_common.sh@10 -- # set +x
00:03:35.607 ************************************
00:03:35.607 END TEST hugepages
00:03:35.607 ************************************
00:03:35.607 21:08:50 -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/driver.sh
00:03:35.607 21:08:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:03:35.607 21:08:50 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:03:35.607 21:08:50 -- common/autotest_common.sh@10 -- # set +x
00:03:35.869 ************************************
00:03:35.869 START TEST driver
00:03:35.869 ************************************
00:03:35.869 21:08:50 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/driver.sh
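The check that just passed ties the global HugePages_Total to the per-node pools. A hedged sketch of that accounting; the helper name and output format are illustrative, only the sysfs paths come from the kernel's standard layout:

```bash
#!/usr/bin/env bash
shopt -s extglob

# Verify that the global 2 MiB hugepage count matches expectations and
# show each NUMA node's share of the pool.
verify_node_hugepages() {
	local expected=$1 node nr surp
	for node in /sys/devices/system/node/node+([0-9]); do
		nr=$(<"$node/hugepages/hugepages-2048kB/nr_hugepages")
		surp=$(<"$node/hugepages/hugepages-2048kB/surplus_hugepages")
		echo "${node##*/}: nr_hugepages=$nr surplus=$surp"
	done
	# Global view, from the same file the parsing loop above walked.
	(( $(awk '/^HugePages_Total/ {print $2}' /proc/meminfo) == expected ))
}

verify_node_hugepages 1024 && echo 'node0=1024 expecting 1024'
```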
/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/driver.sh
00:03:35.869 * Looking for test storage...
00:03:35.869 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup
00:03:35.869 21:08:50 -- setup/driver.sh@68 -- # setup reset
00:03:35.869 21:08:50 -- setup/common.sh@9 -- # [[ reset == output ]]
00:03:35.869 21:08:50 -- setup/common.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh reset
00:03:41.156 21:08:55 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver
00:03:41.156 21:08:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:03:41.156 21:08:55 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:03:41.156 21:08:55 -- common/autotest_common.sh@10 -- # set +x
00:03:41.156 ************************************
00:03:41.156 START TEST guess_driver
00:03:41.156 ************************************
00:03:41.156 21:08:55 -- common/autotest_common.sh@1111 -- # guess_driver
00:03:41.156 21:08:55 -- setup/driver.sh@46 -- # local driver setup_driver marker
00:03:41.156 21:08:55 -- setup/driver.sh@47 -- # local fail=0
00:03:41.156 21:08:55 -- setup/driver.sh@49 -- # pick_driver
00:03:41.156 21:08:55 -- setup/driver.sh@36 -- # vfio
00:03:41.156 21:08:55 -- setup/driver.sh@21 -- # local iommu_groups
00:03:41.156 21:08:55 -- setup/driver.sh@22 -- # local unsafe_vfio
00:03:41.156 21:08:55 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]
00:03:41.156 21:08:55 -- setup/driver.sh@25 -- # unsafe_vfio=N
00:03:41.156 21:08:55 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*)
00:03:41.156 21:08:55 -- setup/driver.sh@29 -- # (( 335 > 0 ))
00:03:41.156 21:08:55 -- setup/driver.sh@30 -- # is_driver vfio_pci
00:03:41.156 21:08:55 -- setup/driver.sh@14 -- # mod vfio_pci
00:03:41.156 21:08:55 -- setup/driver.sh@12 -- # dep vfio_pci
00:03:41.156 21:08:55 -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci
00:03:41.156 21:08:55 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]]
00:03:41.156 21:08:55 -- setup/driver.sh@30 -- # return 0
00:03:41.156 21:08:55 -- setup/driver.sh@37 -- # echo vfio-pci
00:03:41.156 21:08:55 -- setup/driver.sh@49 -- # driver=vfio-pci
00:03:41.156 21:08:55 -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]]
00:03:41.156 21:08:55 -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci'
00:03:41.156 Looking for driver=vfio-pci
00:03:41.156 21:08:55 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:03:41.156 21:08:55 -- setup/driver.sh@45 -- # setup output config
00:03:41.156 21:08:55 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:41.156 21:08:55 -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh config
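The driver choice traced above hinges on two facts: IOMMU groups exist, and modprobe can resolve the vfio_pci module chain. A sketch of that selection logic; the uio_pci_generic fallback is an assumption about the branch this run never takes:

```bash
#!/usr/bin/env bash
# Pick a userspace PCI driver the way the traced pick_driver/vfio path does.
pick_driver() {
	local iommu_groups=(/sys/kernel/iommu_groups/*)

	# Without IOMMU groups vfio-pci cannot isolate devices.
	if (( ${#iommu_groups[@]} == 0 )); then
		echo uio_pci_generic
		return
	fi

	# --show-depends lists every module insmod would load; a ".ko" hit
	# confirms vfio_pci and its dependencies are installed.
	if modprobe --show-depends vfio_pci 2>/dev/null | grep -q '\.ko'; then
		echo vfio-pci
	else
		echo 'No valid driver found'
	fi
}

pick_driver   # -> vfio-pci here (335 IOMMU groups on this node)
```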
00:03:44.457 21:08:58 -- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:03:44.457 21:08:58 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]]
00:03:44.457 21:08:58 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
[... the same marker/driver check repeats for every "->" line that setup.sh config prints (00:03:44.457 through 00:03:46.360, 21:08:58 through 21:09:01); every device reports vfio-pci ...]
00:03:46.929 21:09:01 -- setup/driver.sh@64 -- # (( fail == 0 ))
00:03:46.929 21:09:01 -- setup/driver.sh@65 -- # setup reset
00:03:46.929 21:09:01 -- setup/common.sh@9 -- # [[ reset == output ]]
00:03:46.929 21:09:01 -- setup/common.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh reset
00:03:52.218 
00:03:52.218 real	0m10.946s
00:03:52.218 user	0m2.224s
00:03:52.218 sys	0m4.436s
00:03:52.218 21:09:06 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:03:52.218 21:09:06 -- common/autotest_common.sh@10 -- # set +x
00:03:52.218 ************************************
00:03:52.218 END TEST guess_driver
00:03:52.218 ************************************
00:03:52.218 
00:03:52.218 real	0m16.352s
00:03:52.218 user	0m3.308s
00:03:52.218 sys	0m6.797s
00:03:52.218 21:09:06 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:03:52.218 21:09:06 -- common/autotest_common.sh@10 -- # set +x
00:03:52.218 ************************************
00:03:52.218 END TEST driver
00:03:52.218 ************************************
00:03:52.218 21:09:06 -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/devices.sh
00:03:52.218 21:09:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:03:52.218 21:09:06 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:03:52.218 21:09:06 -- common/autotest_common.sh@10 -- # set +x
00:03:52.219 ************************************
00:03:52.219 START TEST devices
00:03:52.219 ************************************
00:03:52.219 21:09:07 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/devices.sh
00:03:52.219 * Looking for test storage...
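The condensed loop above is the actual assertion of guess_driver: every device line printed by `setup.sh config` must name the chosen driver. A hedged sketch of that loop; the column positions (marker in field 5, driver in field 6) are inferred from the `read` call in the trace:

```bash
#!/usr/bin/env bash
# Verify that every bound device reports the expected driver.
expected=vfio-pci fail=0
while read -r _ _ _ _ marker setup_driver; do
	# Only "->" lines describe a device-to-driver binding.
	if [[ $marker == '->' && $setup_driver != "$expected" ]]; then
		echo "unexpected driver: $setup_driver" >&2
		fail=1
	fi
done < <(/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh config)
(( fail == 0 )) && echo "all devices bound to $expected"
```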
00:03:52.219 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup
00:03:52.219 21:09:07 -- setup/devices.sh@190 -- # trap cleanup EXIT
00:03:52.219 21:09:07 -- setup/devices.sh@192 -- # setup reset
00:03:52.219 21:09:07 -- setup/common.sh@9 -- # [[ reset == output ]]
00:03:52.219 21:09:07 -- setup/common.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh reset
00:03:56.470 21:09:10 -- setup/devices.sh@194 -- # get_zoned_devs
00:03:56.470 21:09:10 -- common/autotest_common.sh@1655 -- # zoned_devs=()
00:03:56.470 21:09:10 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs
00:03:56.470 21:09:10 -- common/autotest_common.sh@1656 -- # local nvme bdf
00:03:56.470 21:09:10 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme*
00:03:56.470 21:09:10 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1
00:03:56.470 21:09:10 -- common/autotest_common.sh@1648 -- # local device=nvme0n1
00:03:56.470 21:09:10 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:03:56.470 21:09:10 -- common/autotest_common.sh@1651 -- # [[ none != none ]]
[... the same zoned check repeats for nvme1n1 and nvme2n1; none are zoned ...]
00:03:56.470 21:09:10 -- setup/devices.sh@196 -- # blocks=()
00:03:56.470 21:09:10 -- setup/devices.sh@196 -- # declare -a blocks
00:03:56.470 21:09:10 -- setup/devices.sh@197 -- # blocks_to_pci=()
00:03:56.470 21:09:10 -- setup/devices.sh@197 -- # declare -A blocks_to_pci
00:03:56.470 21:09:10 -- setup/devices.sh@198 -- # min_disk_size=3221225472
00:03:56.470 21:09:10 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*)
00:03:56.470 21:09:10 -- setup/devices.sh@201 -- # ctrl=nvme0n1
00:03:56.470 21:09:10 -- setup/devices.sh@201 -- # ctrl=nvme0
00:03:56.470 21:09:10 -- setup/devices.sh@202 -- # pci=0000:c9:00.0
00:03:56.470 21:09:10 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\c\9\:\0\0\.\0* ]]
00:03:56.470 21:09:10 -- setup/devices.sh@204 -- # block_in_use nvme0n1
00:03:56.470 21:09:10 -- scripts/common.sh@378 -- # local block=nvme0n1 pt
00:03:56.470 21:09:10 -- scripts/common.sh@387 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1
00:03:56.470 No valid GPT data, bailing
00:03:56.470 21:09:10 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:03:56.470 21:09:10 -- scripts/common.sh@391 -- # pt=
00:03:56.470 21:09:10 -- scripts/common.sh@392 -- # return 1
00:03:56.470 21:09:10 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1
00:03:56.470 21:09:10 -- setup/common.sh@76 -- # local dev=nvme0n1
00:03:56.470 21:09:10 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]]
00:03:56.470 21:09:10 -- setup/common.sh@80 -- # echo 2000398934016
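The "No valid GPT data, bailing" message above is the desired outcome: a disk with no partition table is free for the test to claim. A minimal sketch of that probe, following the traced blkid fallback (spdk-gpt.py is SPDK's own checker and is not reproduced here):

```bash
#!/usr/bin/env bash
# block_in_use DEV: succeed (0) if the disk carries a partition table.
block_in_use() {
	local block=$1 pt
	# blkid prints the partition-table type (gpt, dos, ...) or nothing.
	pt=$(blkid -s PTTYPE -o value "/dev/$block")
	if [[ -z $pt ]]; then
		return 1   # no table -> "No valid GPT data, bailing" -> free to use
	fi
	return 0
}

block_in_use nvme0n1 || echo "nvme0n1 has no partition table"
```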
00:03:56.471 21:09:10 -- setup/devices.sh@204 -- # (( 2000398934016 >= min_disk_size ))
00:03:56.471 21:09:10 -- setup/devices.sh@205 -- # blocks+=("${block##*/}")
00:03:56.471 21:09:10 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:c9:00.0
[... the same probe repeats for nvme1n1 (pci 0000:cb:00.0) and nvme2n1 (pci 0000:ca:00.0); both report "No valid GPT data, bailing" and 2000398934016 bytes ...]
00:03:56.471 21:09:10 -- setup/devices.sh@209 -- # (( 3 > 0 ))
00:03:56.471 21:09:10 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1
00:03:56.471 21:09:10 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount
00:03:56.471 21:09:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:03:56.471 21:09:10 -- common/autotest_common.sh@1093 -- # xtrace_disable
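Each accepted disk is recorded together with its PCI address, which is what the later verify steps match against PCI_ALLOWED. A sketch of that scan under stated assumptions: recovering the address via the sysfs device symlink is my approximation, not the verbatim SPDK helper:

```bash
#!/usr/bin/env bash
shopt -s extglob

# Build the candidate list: every nvme block device (skipping *c*
# controller character nodes) of at least 3 GiB, mapped to its PCI address.
declare -a blocks
declare -A blocks_to_pci
min_disk_size=$((3 * 1024 * 1024 * 1024))

for block in /sys/block/nvme!(*c*); do
	name=${block##*/}
	size=$(( $(<"$block/size") * 512 ))   # size file counts 512-byte sectors
	(( size >= min_disk_size )) || continue
	pci=$(basename "$(readlink -f "$block/device/device")")
	blocks+=("$name")
	blocks_to_pci[$name]=$pci
done

for name in "${blocks[@]}"; do
	echo "$name -> ${blocks_to_pci[$name]}"   # e.g. nvme0n1 -> 0000:c9:00.0
done
```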
00:03:56.471 21:09:10 -- common/autotest_common.sh@10 -- # set +x
00:03:56.471 ************************************
00:03:56.471 START TEST nvme_mount
00:03:56.471 ************************************
00:03:56.471 21:09:10 -- common/autotest_common.sh@1111 -- # nvme_mount
00:03:56.471 21:09:10 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1
00:03:56.471 21:09:10 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1
00:03:56.471 21:09:10 -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount
00:03:56.471 21:09:10 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:03:56.471 21:09:10 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1
00:03:56.471 21:09:10 -- setup/common.sh@39 -- # local disk=nvme0n1
00:03:56.471 21:09:10 -- setup/common.sh@40 -- # local part_no=1
00:03:56.471 21:09:10 -- setup/common.sh@41 -- # local size=1073741824
00:03:56.471 21:09:10 -- setup/common.sh@43 -- # local part part_start=0 part_end=0
00:03:56.471 21:09:10 -- setup/common.sh@44 -- # parts=()
00:03:56.471 21:09:10 -- setup/common.sh@44 -- # local parts
00:03:56.471 21:09:10 -- setup/common.sh@46 -- # (( part = 1 ))
00:03:56.471 21:09:10 -- setup/common.sh@46 -- # (( part <= part_no ))
00:03:56.471 21:09:10 -- setup/common.sh@47 -- # parts+=("${disk}p$part")
00:03:56.471 21:09:10 -- setup/common.sh@46 -- # (( part++ ))
00:03:56.471 21:09:10 -- setup/common.sh@46 -- # (( part <= part_no ))
00:03:56.471 21:09:10 -- setup/common.sh@51 -- # (( size /= 512 ))
00:03:56.471 21:09:10 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all
00:03:56.471 21:09:10 -- setup/common.sh@53 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1
00:03:57.042 Creating new GPT entries in memory.
00:03:57.042 GPT data structures destroyed! You may now partition the disk using fdisk or
00:03:57.042 other utilities.
00:03:57.042 21:09:11 -- setup/common.sh@57 -- # (( part = 1 ))
00:03:57.042 21:09:11 -- setup/common.sh@57 -- # (( part <= part_no ))
00:03:57.042 21:09:11 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
00:03:57.042 21:09:11 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 ))
00:03:57.042 21:09:11 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199
00:03:57.984 Creating new GPT entries in memory.
00:03:57.984 The operation has completed successfully.
00:03:57.984 21:09:12 -- setup/common.sh@57 -- # (( part++ ))
00:03:57.984 21:09:12 -- setup/common.sh@57 -- # (( part <= part_no ))
00:03:57.984 21:09:12 -- setup/common.sh@62 -- # wait 984579
00:03:57.984 21:09:12 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount
00:03:57.984 21:09:12 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount size=
00:03:57.984 21:09:12 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount
00:03:57.984 21:09:12 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]]
00:03:57.984 21:09:12 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1
00:03:57.984 21:09:12 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount
00:03:58.245 21:09:12 -- setup/devices.sh@105 -- # verify 0000:c9:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:03:58.245 21:09:12 -- setup/devices.sh@48 -- # local dev=0000:c9:00.0
00:03:58.245 21:09:12 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1
00:03:58.245 21:09:12 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount
00:03:58.245 21:09:12 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:03:58.245 21:09:12 -- setup/devices.sh@53 -- # local found=0
00:03:58.245 21:09:12 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]]
00:03:58.245 21:09:12 -- setup/devices.sh@56 -- # :
00:03:58.245 21:09:12 -- setup/devices.sh@59 -- # local pci status
00:03:58.245 21:09:12 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:58.245 21:09:12 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:c9:00.0
00:03:58.246 21:09:12 -- setup/devices.sh@47 -- # setup output config
00:03:58.246 21:09:12 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:58.246 21:09:12 -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh config
00:04:00.923 21:09:15 -- setup/devices.sh@62 -- # [[ 0000:c9:00.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]]
00:04:00.923 21:09:15 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]]
00:04:00.923 21:09:15 -- setup/devices.sh@63 -- # found=1
00:04:00.923 21:09:15 -- setup/devices.sh@60 -- # read -r pci _ _ status
[... setup/devices.sh@62 compares repeat for 0000:74:02.0, 0000:f1:02.0, 0000:cb:00.0, 0000:79:02.0, 0000:6f:01.0, 0000:6f:02.0, 0000:f6:01.0, 0000:f6:02.0, 0000:74:01.0, 0000:6a:02.0, 0000:79:01.0, 0000:ec:01.0, 0000:6a:01.0, 0000:ca:00.0, 0000:ec:02.0, 0000:e7:01.0, 0000:e7:02.0, 0000:f1:01.0 — none match 0000:c9:00.0 ...]
00:04:01.184 21:09:16 -- setup/devices.sh@66 -- # (( found == 1 ))
00:04:01.184 21:09:16 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount ]]
00:04:01.184 21:09:16 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount
00:04:01.184 21:09:16 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]]
00:04:01.184 21:09:16 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:04:01.184 21:09:16 -- setup/devices.sh@110 -- # cleanup_nvme
00:04:01.184 21:09:16 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount
00:04:01.184 21:09:16 -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount
00:04:01.184 21:09:16 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]]
00:04:01.184 21:09:16 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1
00:04:01.184 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef
00:04:01.184 21:09:16 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]]
00:04:01.184 21:09:16 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1
00:04:01.444 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54
00:04:01.444 /dev/nvme0n1: 8 bytes were erased at offset 0x1d1c1115e00 (gpt): 45 46 49 20 50 41 52 54
00:04:01.444 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
00:04:01.444 /dev/nvme0n1: calling ioctl to re-read partition table: Success
00:04:01.444 21:09:16 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount 1024M
00:04:01.444 21:09:16 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount size=1024M
00:04:01.444 21:09:16 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount
00:04:01.444 21:09:16 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]]
00:04:01.444 21:09:16 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M
00:04:01.444 21:09:16 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount
00:04:01.705 21:09:16 -- setup/devices.sh@116 -- # verify 0000:c9:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:04:01.705 21:09:16 -- setup/devices.sh@48 -- # local dev=0000:c9:00.0
00:04:01.705 21:09:16 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1
00:04:01.705 21:09:16 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount
00:04:01.705 21:09:16 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:04:01.705 21:09:16 -- setup/devices.sh@53 -- # local found=0
00:04:01.705 21:09:16 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]]
00:04:01.705 21:09:16 -- setup/devices.sh@56 -- # :
00:04:01.705 21:09:16 -- setup/devices.sh@59 -- # local pci status
00:04:01.705 21:09:16 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:01.705 21:09:16 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:c9:00.0
00:04:01.705 21:09:16 -- setup/devices.sh@47 -- # setup output config
00:04:01.705 21:09:16 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:01.705 21:09:16 -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh config
00:04:04.256 21:09:19 -- setup/devices.sh@62 -- # [[ 0000:c9:00.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]]
00:04:04.256 21:09:19 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]]
00:04:04.256 21:09:19 -- setup/devices.sh@63 -- # found=1
00:04:04.256 21:09:19 -- setup/devices.sh@60 -- # read -r pci _ _ status
[... setup/devices.sh@62 compares repeat for 0000:74:02.0, 0000:f1:02.0, 0000:cb:00.0, 0000:79:02.0, 0000:6f:01.0, 0000:6f:02.0, 0000:f6:01.0, 0000:f6:02.0, 0000:74:01.0, 0000:6a:02.0, 0000:79:01.0, 0000:ec:01.0, 0000:6a:01.0, 0000:ca:00.0, 0000:ec:02.0, 0000:e7:01.0, 0000:e7:02.0, 0000:f1:01.0 — none match 0000:c9:00.0 ...]
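The same mkfs/mount helper runs twice above: once against the partition and once against the whole disk with an explicit 1024M filesystem size. A sketch under stated assumptions (mount point shortened for the example; flags match the traced commands):

```bash
#!/usr/bin/env bash
# mkfs_and_mount DEV MOUNT [SIZE]: format DEV as ext4 and mount it.
mkfs_and_mount() {
	local dev=$1 mount=$2 size=${3:-}
	mkdir -p "$mount"
	[[ -e $dev ]] || return 1
	# -q: quiet, -F: force; the whole-disk case is not a partition, so
	# mkfs.ext4 would otherwise prompt. $size is intentionally unquoted
	# so an empty value expands to no argument at all.
	mkfs.ext4 -qF "$dev" $size
	mount "$dev" "$mount"
}

mkfs_and_mount /dev/nvme0n1p1 /tmp/nvme_mount
mkfs_and_mount /dev/nvme0n1   /tmp/nvme_mount 1024M
```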
setup/devices.sh@62 -- # [[ 0000:6f:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:04.524 21:09:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.524 21:09:19 -- setup/devices.sh@62 -- # [[ 0000:6f:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:04.524 21:09:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.524 21:09:19 -- setup/devices.sh@62 -- # [[ 0000:f6:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:04.524 21:09:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.524 21:09:19 -- setup/devices.sh@62 -- # [[ 0000:f6:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:04.524 21:09:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.524 21:09:19 -- setup/devices.sh@62 -- # [[ 0000:74:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:04.524 21:09:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.524 21:09:19 -- setup/devices.sh@62 -- # [[ 0000:6a:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:04.524 21:09:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.524 21:09:19 -- setup/devices.sh@62 -- # [[ 0000:79:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:04.524 21:09:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.524 21:09:19 -- setup/devices.sh@62 -- # [[ 0000:ec:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:04.524 21:09:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.524 21:09:19 -- setup/devices.sh@62 -- # [[ 0000:6a:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:04.524 21:09:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.524 21:09:19 -- setup/devices.sh@62 -- # [[ 0000:ca:00.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:04.524 21:09:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.524 21:09:19 -- setup/devices.sh@62 -- # [[ 0000:ec:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:04.524 21:09:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.524 21:09:19 -- setup/devices.sh@62 -- # [[ 0000:e7:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:04.524 21:09:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.524 21:09:19 -- setup/devices.sh@62 -- # [[ 0000:e7:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:04.524 21:09:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.524 21:09:19 -- setup/devices.sh@62 -- # [[ 0000:f1:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:04.524 21:09:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.094 21:09:19 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:05.094 21:09:19 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:05.094 21:09:19 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount 00:04:05.094 21:09:19 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:05.094 21:09:19 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:05.094 21:09:19 -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount 00:04:05.094 21:09:19 -- setup/devices.sh@125 -- # verify 0000:c9:00.0 data@nvme0n1 '' '' 00:04:05.094 21:09:19 -- setup/devices.sh@48 -- # local dev=0000:c9:00.0 00:04:05.094 21:09:19 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:05.094 21:09:19 -- setup/devices.sh@50 -- # local mount_point= 00:04:05.094 21:09:19 -- setup/devices.sh@51 -- # local test_file= 00:04:05.094 21:09:19 -- setup/devices.sh@53 -- # local found=0 00:04:05.094 21:09:19 -- setup/devices.sh@55 
-- # [[ -n '' ]] 00:04:05.094 21:09:19 -- setup/devices.sh@59 -- # local pci status 00:04:05.094 21:09:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.094 21:09:19 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:c9:00.0 00:04:05.094 21:09:19 -- setup/devices.sh@47 -- # setup output config 00:04:05.094 21:09:19 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:05.094 21:09:19 -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh config 00:04:07.641 21:09:22 -- setup/devices.sh@62 -- # [[ 0000:c9:00.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:07.641 21:09:22 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:07.641 21:09:22 -- setup/devices.sh@63 -- # found=1 00:04:07.641 21:09:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.641 21:09:22 -- setup/devices.sh@62 -- # [[ 0000:74:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:07.641 21:09:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.641 21:09:22 -- setup/devices.sh@62 -- # [[ 0000:f1:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:07.641 21:09:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.641 21:09:22 -- setup/devices.sh@62 -- # [[ 0000:cb:00.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:07.641 21:09:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.641 21:09:22 -- setup/devices.sh@62 -- # [[ 0000:79:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:07.641 21:09:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.641 21:09:22 -- setup/devices.sh@62 -- # [[ 0000:6f:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:07.641 21:09:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.641 21:09:22 -- setup/devices.sh@62 -- # [[ 0000:6f:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:07.641 21:09:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.641 21:09:22 -- setup/devices.sh@62 -- # [[ 0000:f6:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:07.641 21:09:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.641 21:09:22 -- setup/devices.sh@62 -- # [[ 0000:f6:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:07.641 21:09:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.641 21:09:22 -- setup/devices.sh@62 -- # [[ 0000:74:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:07.641 21:09:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.641 21:09:22 -- setup/devices.sh@62 -- # [[ 0000:6a:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:07.641 21:09:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.641 21:09:22 -- setup/devices.sh@62 -- # [[ 0000:79:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:07.641 21:09:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.641 21:09:22 -- setup/devices.sh@62 -- # [[ 0000:ec:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:07.641 21:09:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.641 21:09:22 -- setup/devices.sh@62 -- # [[ 0000:6a:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:07.641 21:09:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.641 21:09:22 -- setup/devices.sh@62 -- # [[ 0000:ca:00.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:07.641 21:09:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.902 21:09:22 -- setup/devices.sh@62 -- # [[ 0000:ec:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:07.902 21:09:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.902 21:09:22 -- setup/devices.sh@62 -- # [[ 0000:e7:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:07.902 
21:09:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.902 21:09:22 -- setup/devices.sh@62 -- # [[ 0000:e7:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:07.902 21:09:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.902 21:09:22 -- setup/devices.sh@62 -- # [[ 0000:f1:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:07.902 21:09:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.183 21:09:23 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:08.183 21:09:23 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:08.183 21:09:23 -- setup/devices.sh@68 -- # return 0 00:04:08.183 21:09:23 -- setup/devices.sh@128 -- # cleanup_nvme 00:04:08.183 21:09:23 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount 00:04:08.183 21:09:23 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:08.183 21:09:23 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:08.183 21:09:23 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:08.183 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:08.183 00:04:08.183 real 0m12.291s 00:04:08.183 user 0m3.234s 00:04:08.183 sys 0m6.258s 00:04:08.183 21:09:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:08.183 21:09:23 -- common/autotest_common.sh@10 -- # set +x 00:04:08.183 ************************************ 00:04:08.183 END TEST nvme_mount 00:04:08.183 ************************************ 00:04:08.454 21:09:23 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:08.454 21:09:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:08.454 21:09:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:08.454 21:09:23 -- common/autotest_common.sh@10 -- # set +x 00:04:08.454 ************************************ 00:04:08.454 START TEST dm_mount 00:04:08.454 ************************************ 00:04:08.454 21:09:23 -- common/autotest_common.sh@1111 -- # dm_mount 00:04:08.454 21:09:23 -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:08.454 21:09:23 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:08.454 21:09:23 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:08.454 21:09:23 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:08.454 21:09:23 -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:08.454 21:09:23 -- setup/common.sh@40 -- # local part_no=2 00:04:08.454 21:09:23 -- setup/common.sh@41 -- # local size=1073741824 00:04:08.454 21:09:23 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:08.454 21:09:23 -- setup/common.sh@44 -- # parts=() 00:04:08.454 21:09:23 -- setup/common.sh@44 -- # local parts 00:04:08.454 21:09:23 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:08.454 21:09:23 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:08.454 21:09:23 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:08.454 21:09:23 -- setup/common.sh@46 -- # (( part++ )) 00:04:08.454 21:09:23 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:08.454 21:09:23 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:08.454 21:09:23 -- setup/common.sh@46 -- # (( part++ )) 00:04:08.454 21:09:23 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:08.454 21:09:23 -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:08.454 21:09:23 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:08.454 21:09:23 -- setup/common.sh@53 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:09.399 Creating new GPT entries in memory. 
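The dm_mount test starting here partitions the namespace with sgdisk before layering device-mapper on top. As a reading aid, the traced sequence reduces to the sketch below (annotation, not captured output; assumes a scratch /dev/nvme0n1 that may be wiped):

  # size=1073741824 bytes; size /= 512 gives 2097152 sectors per partition
  disk=/dev/nvme0n1
  sgdisk "$disk" --zap-all                              # destroy any old GPT/MBR
  flock "$disk" sgdisk "$disk" --new=1:2048:2099199     # p1: 2048 .. 2048+2097152-1
  flock "$disk" sgdisk "$disk" --new=2:2099200:4196351  # p2 starts right after p1
  # in parallel the harness waits for udev to surface both partitions via
  #   scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2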
00:04:09.399 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:09.399 other utilities. 00:04:09.399 21:09:24 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:09.399 21:09:24 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:09.399 21:09:24 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:09.399 21:09:24 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:09.399 21:09:24 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:10.341 Creating new GPT entries in memory. 00:04:10.341 The operation has completed successfully. 00:04:10.341 21:09:25 -- setup/common.sh@57 -- # (( part++ )) 00:04:10.341 21:09:25 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:10.341 21:09:25 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:10.341 21:09:25 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:10.341 21:09:25 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:11.726 The operation has completed successfully. 00:04:11.726 21:09:26 -- setup/common.sh@57 -- # (( part++ )) 00:04:11.726 21:09:26 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:11.726 21:09:26 -- setup/common.sh@62 -- # wait 989639 00:04:11.726 21:09:26 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:11.726 21:09:26 -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount 00:04:11.726 21:09:26 -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:11.726 21:09:26 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:11.726 21:09:26 -- setup/devices.sh@160 -- # for t in {1..5} 00:04:11.726 21:09:26 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:11.726 21:09:26 -- setup/devices.sh@161 -- # break 00:04:11.726 21:09:26 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:11.726 21:09:26 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:11.726 21:09:26 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:11.726 21:09:26 -- setup/devices.sh@166 -- # dm=dm-0 00:04:11.726 21:09:26 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:11.726 21:09:26 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:11.726 21:09:26 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount 00:04:11.726 21:09:26 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount size= 00:04:11.726 21:09:26 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount 00:04:11.726 21:09:26 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:11.726 21:09:26 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:11.726 21:09:26 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount 00:04:11.726 21:09:26 -- setup/devices.sh@174 -- # verify 0000:c9:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:11.726 21:09:26 -- setup/devices.sh@48 -- # local dev=0000:c9:00.0 00:04:11.726 21:09:26 -- 
setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:11.726 21:09:26 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount 00:04:11.726 21:09:26 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:11.726 21:09:26 -- setup/devices.sh@53 -- # local found=0 00:04:11.726 21:09:26 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:11.726 21:09:26 -- setup/devices.sh@56 -- # : 00:04:11.726 21:09:26 -- setup/devices.sh@59 -- # local pci status 00:04:11.726 21:09:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.726 21:09:26 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:c9:00.0 00:04:11.726 21:09:26 -- setup/devices.sh@47 -- # setup output config 00:04:11.726 21:09:26 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:11.726 21:09:26 -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh config 00:04:14.270 21:09:28 -- setup/devices.sh@62 -- # [[ 0000:c9:00.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:14.270 21:09:28 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:14.270 21:09:28 -- setup/devices.sh@63 -- # found=1 00:04:14.270 21:09:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.270 21:09:28 -- setup/devices.sh@62 -- # [[ 0000:74:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:14.270 21:09:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.270 21:09:28 -- setup/devices.sh@62 -- # [[ 0000:f1:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:14.270 21:09:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.270 21:09:28 -- setup/devices.sh@62 -- # [[ 0000:cb:00.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:14.270 21:09:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.270 21:09:28 -- setup/devices.sh@62 -- # [[ 0000:79:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:14.270 21:09:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.270 21:09:28 -- setup/devices.sh@62 -- # [[ 0000:6f:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:14.270 21:09:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.270 21:09:28 -- setup/devices.sh@62 -- # [[ 0000:6f:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:14.270 21:09:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.270 21:09:28 -- setup/devices.sh@62 -- # [[ 0000:f6:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:14.270 21:09:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.270 21:09:28 -- setup/devices.sh@62 -- # [[ 0000:f6:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:14.270 21:09:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.270 21:09:28 -- setup/devices.sh@62 -- # [[ 0000:74:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:14.270 21:09:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.270 21:09:28 -- setup/devices.sh@62 -- # [[ 0000:6a:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:14.270 21:09:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.270 21:09:28 -- setup/devices.sh@62 -- # [[ 0000:79:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:14.270 21:09:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.270 21:09:28 -- setup/devices.sh@62 -- # [[ 0000:ec:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:14.270 21:09:28 -- setup/devices.sh@60 -- # read -r 
pci _ _ status 00:04:14.270 21:09:28 -- setup/devices.sh@62 -- # [[ 0000:6a:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:14.270 21:09:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.270 21:09:28 -- setup/devices.sh@62 -- # [[ 0000:ca:00.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:14.270 21:09:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.270 21:09:29 -- setup/devices.sh@62 -- # [[ 0000:ec:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:14.270 21:09:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.270 21:09:29 -- setup/devices.sh@62 -- # [[ 0000:e7:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:14.270 21:09:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.270 21:09:29 -- setup/devices.sh@62 -- # [[ 0000:e7:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:14.270 21:09:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.270 21:09:29 -- setup/devices.sh@62 -- # [[ 0000:f1:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:14.270 21:09:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.839 21:09:29 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:14.839 21:09:29 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:14.839 21:09:29 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount 00:04:14.839 21:09:29 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:14.839 21:09:29 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:14.839 21:09:29 -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount 00:04:14.839 21:09:29 -- setup/devices.sh@184 -- # verify 0000:c9:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:14.839 21:09:29 -- setup/devices.sh@48 -- # local dev=0000:c9:00.0 00:04:14.839 21:09:29 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:14.839 21:09:29 -- setup/devices.sh@50 -- # local mount_point= 00:04:14.839 21:09:29 -- setup/devices.sh@51 -- # local test_file= 00:04:14.839 21:09:29 -- setup/devices.sh@53 -- # local found=0 00:04:14.839 21:09:29 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:14.839 21:09:29 -- setup/devices.sh@59 -- # local pci status 00:04:14.839 21:09:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.839 21:09:29 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:c9:00.0 00:04:14.839 21:09:29 -- setup/devices.sh@47 -- # setup output config 00:04:14.839 21:09:29 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:14.839 21:09:29 -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh config 00:04:17.383 21:09:32 -- setup/devices.sh@62 -- # [[ 0000:c9:00.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:17.383 21:09:32 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:17.383 21:09:32 -- setup/devices.sh@63 -- # found=1 00:04:17.383 21:09:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.383 21:09:32 -- setup/devices.sh@62 -- # [[ 0000:74:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:17.383 21:09:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.383 21:09:32 -- setup/devices.sh@62 -- # [[ 0000:f1:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:17.383 21:09:32 
-- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.383 21:09:32 -- setup/devices.sh@62 -- # [[ 0000:cb:00.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:17.383 21:09:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.383 21:09:32 -- setup/devices.sh@62 -- # [[ 0000:79:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:17.383 21:09:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.383 21:09:32 -- setup/devices.sh@62 -- # [[ 0000:6f:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:17.383 21:09:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.383 21:09:32 -- setup/devices.sh@62 -- # [[ 0000:6f:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:17.383 21:09:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.383 21:09:32 -- setup/devices.sh@62 -- # [[ 0000:f6:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:17.383 21:09:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.383 21:09:32 -- setup/devices.sh@62 -- # [[ 0000:f6:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:17.383 21:09:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.383 21:09:32 -- setup/devices.sh@62 -- # [[ 0000:74:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:17.383 21:09:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.383 21:09:32 -- setup/devices.sh@62 -- # [[ 0000:6a:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:17.383 21:09:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.383 21:09:32 -- setup/devices.sh@62 -- # [[ 0000:79:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:17.383 21:09:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.383 21:09:32 -- setup/devices.sh@62 -- # [[ 0000:ec:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:17.383 21:09:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.383 21:09:32 -- setup/devices.sh@62 -- # [[ 0000:6a:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:17.383 21:09:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.383 21:09:32 -- setup/devices.sh@62 -- # [[ 0000:ca:00.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:17.383 21:09:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.383 21:09:32 -- setup/devices.sh@62 -- # [[ 0000:ec:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:17.383 21:09:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.383 21:09:32 -- setup/devices.sh@62 -- # [[ 0000:e7:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:17.383 21:09:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.383 21:09:32 -- setup/devices.sh@62 -- # [[ 0000:e7:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:17.383 21:09:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.383 21:09:32 -- setup/devices.sh@62 -- # [[ 0000:f1:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:17.383 21:09:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.951 21:09:32 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:17.951 21:09:32 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:17.951 21:09:32 -- setup/devices.sh@68 -- # return 0 00:04:17.951 21:09:32 -- setup/devices.sh@187 -- # cleanup_dm 00:04:17.951 21:09:32 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount 00:04:17.951 21:09:32 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:17.951 21:09:32 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:17.951 21:09:32 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:17.951 21:09:32 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:17.951 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 
ef 00:04:17.951 21:09:32 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:17.951 21:09:32 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:17.951 00:04:17.951 real 0m9.604s 00:04:17.951 user 0m2.116s 00:04:17.951 sys 0m4.060s 00:04:17.951 21:09:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:17.951 21:09:32 -- common/autotest_common.sh@10 -- # set +x 00:04:17.951 ************************************ 00:04:17.951 END TEST dm_mount 00:04:17.951 ************************************ 00:04:17.951 21:09:32 -- setup/devices.sh@1 -- # cleanup 00:04:17.951 21:09:32 -- setup/devices.sh@11 -- # cleanup_nvme 00:04:17.951 21:09:32 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount 00:04:17.951 21:09:32 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:17.951 21:09:32 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:18.211 21:09:32 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:18.211 21:09:32 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:18.470 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:18.470 /dev/nvme0n1: 8 bytes were erased at offset 0x1d1c1115e00 (gpt): 45 46 49 20 50 41 52 54 00:04:18.470 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:18.470 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:18.470 21:09:33 -- setup/devices.sh@12 -- # cleanup_dm 00:04:18.470 21:09:33 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount 00:04:18.470 21:09:33 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:18.470 21:09:33 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:18.470 21:09:33 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:18.470 21:09:33 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:18.470 21:09:33 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:18.470 00:04:18.470 real 0m26.107s 00:04:18.470 user 0m6.692s 00:04:18.470 sys 0m12.812s 00:04:18.470 21:09:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:18.470 21:09:33 -- common/autotest_common.sh@10 -- # set +x 00:04:18.470 ************************************ 00:04:18.470 END TEST devices 00:04:18.470 ************************************ 00:04:18.470 00:04:18.470 real 1m34.482s 00:04:18.470 user 0m24.712s 00:04:18.470 sys 0m46.765s 00:04:18.470 21:09:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:18.470 21:09:33 -- common/autotest_common.sh@10 -- # set +x 00:04:18.470 ************************************ 00:04:18.470 END TEST setup.sh 00:04:18.470 ************************************ 00:04:18.470 21:09:33 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh status 00:04:21.014 Hugepages 00:04:21.014 node hugesize free / total 00:04:21.014 node0 1048576kB 0 / 0 00:04:21.014 node0 2048kB 2048 / 2048 00:04:21.014 node1 1048576kB 0 / 0 00:04:21.014 node1 2048kB 0 / 0 00:04:21.014 00:04:21.014 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:21.014 DSA 0000:6a:01.0 8086 0b25 0 idxd - - 00:04:21.014 IAA 0000:6a:02.0 8086 0cfe 0 idxd - - 00:04:21.014 DSA 0000:6f:01.0 8086 0b25 0 idxd - - 00:04:21.014 IAA 0000:6f:02.0 8086 0cfe 0 idxd - - 00:04:21.014 DSA 0000:74:01.0 8086 0b25 0 idxd - - 00:04:21.014 IAA 0000:74:02.0 8086 0cfe 0 idxd - - 00:04:21.014 DSA 0000:79:01.0 8086 0b25 0 idxd - - 00:04:21.014 IAA 0000:79:02.0 8086 0cfe 0 idxd - - 
00:04:21.014 NVMe 0000:c9:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:04:21.014 NVMe 0000:ca:00.0 8086 0a54 1 nvme nvme2 nvme2n1 00:04:21.014 NVMe 0000:cb:00.0 8086 0a54 1 nvme nvme1 nvme1n1 00:04:21.275 DSA 0000:e7:01.0 8086 0b25 1 idxd - - 00:04:21.275 IAA 0000:e7:02.0 8086 0cfe 1 idxd - - 00:04:21.275 DSA 0000:ec:01.0 8086 0b25 1 idxd - - 00:04:21.275 IAA 0000:ec:02.0 8086 0cfe 1 idxd - - 00:04:21.275 DSA 0000:f1:01.0 8086 0b25 1 idxd - - 00:04:21.275 IAA 0000:f1:02.0 8086 0cfe 1 idxd - - 00:04:21.275 DSA 0000:f6:01.0 8086 0b25 1 idxd - - 00:04:21.275 IAA 0000:f6:02.0 8086 0cfe 1 idxd - - 00:04:21.275 21:09:36 -- spdk/autotest.sh@130 -- # uname -s 00:04:21.275 21:09:36 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:21.275 21:09:36 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:21.275 21:09:36 -- common/autotest_common.sh@1517 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh 00:04:23.821 0000:74:02.0 (8086 0cfe): idxd -> vfio-pci 00:04:23.821 0000:f1:02.0 (8086 0cfe): idxd -> vfio-pci 00:04:23.821 0000:79:02.0 (8086 0cfe): idxd -> vfio-pci 00:04:23.821 0000:6f:01.0 (8086 0b25): idxd -> vfio-pci 00:04:23.821 0000:6f:02.0 (8086 0cfe): idxd -> vfio-pci 00:04:23.821 0000:f6:01.0 (8086 0b25): idxd -> vfio-pci 00:04:23.821 0000:f6:02.0 (8086 0cfe): idxd -> vfio-pci 00:04:23.821 0000:74:01.0 (8086 0b25): idxd -> vfio-pci 00:04:23.821 0000:6a:02.0 (8086 0cfe): idxd -> vfio-pci 00:04:23.821 0000:79:01.0 (8086 0b25): idxd -> vfio-pci 00:04:24.082 0000:ec:01.0 (8086 0b25): idxd -> vfio-pci 00:04:24.082 0000:6a:01.0 (8086 0b25): idxd -> vfio-pci 00:04:24.082 0000:ec:02.0 (8086 0cfe): idxd -> vfio-pci 00:04:24.082 0000:e7:01.0 (8086 0b25): idxd -> vfio-pci 00:04:24.082 0000:e7:02.0 (8086 0cfe): idxd -> vfio-pci 00:04:24.082 0000:f1:01.0 (8086 0b25): idxd -> vfio-pci 00:04:25.481 0000:c9:00.0 (8086 0a54): nvme -> vfio-pci 00:04:25.742 0000:cb:00.0 (8086 0a54): nvme -> vfio-pci 00:04:26.003 0000:ca:00.0 (8086 0a54): nvme -> vfio-pci 00:04:26.575 21:09:41 -- common/autotest_common.sh@1518 -- # sleep 1 00:04:27.516 21:09:42 -- common/autotest_common.sh@1519 -- # bdfs=() 00:04:27.516 21:09:42 -- common/autotest_common.sh@1519 -- # local bdfs 00:04:27.516 21:09:42 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:27.516 21:09:42 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:04:27.516 21:09:42 -- common/autotest_common.sh@1499 -- # bdfs=() 00:04:27.516 21:09:42 -- common/autotest_common.sh@1499 -- # local bdfs 00:04:27.516 21:09:42 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:27.516 21:09:42 -- common/autotest_common.sh@1500 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:27.516 21:09:42 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:04:27.516 21:09:42 -- common/autotest_common.sh@1501 -- # (( 3 == 0 )) 00:04:27.516 21:09:42 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:c9:00.0 0000:ca:00.0 0000:cb:00.0 00:04:27.516 21:09:42 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh reset 00:04:30.815 Waiting for block devices as requested 00:04:30.816 0000:c9:00.0 (8086 0a54): vfio-pci -> nvme 00:04:30.816 0000:74:02.0 (8086 0cfe): vfio-pci -> idxd 00:04:30.816 0000:f1:02.0 (8086 0cfe): vfio-pci -> idxd 00:04:30.816 0000:cb:00.0 (8086 0a54): vfio-pci -> nvme 00:04:30.816 0000:79:02.0 (8086 0cfe): vfio-pci -> idxd 00:04:31.076 0000:6f:01.0 (8086 0b25): vfio-pci -> 
idxd 00:04:31.076 0000:6f:02.0 (8086 0cfe): vfio-pci -> idxd 00:04:31.336 0000:f6:01.0 (8086 0b25): vfio-pci -> idxd 00:04:31.336 0000:f6:02.0 (8086 0cfe): vfio-pci -> idxd 00:04:31.336 0000:74:01.0 (8086 0b25): vfio-pci -> idxd 00:04:31.596 0000:6a:02.0 (8086 0cfe): vfio-pci -> idxd 00:04:31.596 0000:79:01.0 (8086 0b25): vfio-pci -> idxd 00:04:31.855 0000:ec:01.0 (8086 0b25): vfio-pci -> idxd 00:04:31.855 0000:6a:01.0 (8086 0b25): vfio-pci -> idxd 00:04:32.116 0000:ca:00.0 (8086 0a54): vfio-pci -> nvme 00:04:32.116 0000:ec:02.0 (8086 0cfe): vfio-pci -> idxd 00:04:32.376 0000:e7:01.0 (8086 0b25): vfio-pci -> idxd 00:04:32.376 0000:e7:02.0 (8086 0cfe): vfio-pci -> idxd 00:04:32.636 0000:f1:01.0 (8086 0b25): vfio-pci -> idxd 00:04:32.896 21:09:47 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:32.896 21:09:47 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:c9:00.0 00:04:32.896 21:09:47 -- common/autotest_common.sh@1488 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 00:04:32.896 21:09:47 -- common/autotest_common.sh@1488 -- # grep 0000:c9:00.0/nvme/nvme 00:04:32.896 21:09:47 -- common/autotest_common.sh@1488 -- # bdf_sysfs_path=/sys/devices/pci0000:c7/0000:c7:03.0/0000:c9:00.0/nvme/nvme0 00:04:32.896 21:09:47 -- common/autotest_common.sh@1489 -- # [[ -z /sys/devices/pci0000:c7/0000:c7:03.0/0000:c9:00.0/nvme/nvme0 ]] 00:04:32.896 21:09:47 -- common/autotest_common.sh@1493 -- # basename /sys/devices/pci0000:c7/0000:c7:03.0/0000:c9:00.0/nvme/nvme0 00:04:32.896 21:09:47 -- common/autotest_common.sh@1493 -- # printf '%s\n' nvme0 00:04:32.896 21:09:47 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:32.896 21:09:47 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:32.896 21:09:47 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:32.896 21:09:47 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:32.896 21:09:47 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:32.896 21:09:47 -- common/autotest_common.sh@1531 -- # oacs=' 0xe' 00:04:32.896 21:09:47 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:32.896 21:09:47 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:32.896 21:09:47 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:32.896 21:09:47 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:32.896 21:09:47 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:32.896 21:09:47 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:32.896 21:09:47 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:32.896 21:09:47 -- common/autotest_common.sh@1543 -- # continue 00:04:32.896 21:09:47 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:32.896 21:09:47 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:ca:00.0 00:04:32.896 21:09:47 -- common/autotest_common.sh@1488 -- # grep 0000:ca:00.0/nvme/nvme 00:04:32.896 21:09:47 -- common/autotest_common.sh@1488 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 00:04:32.896 21:09:47 -- common/autotest_common.sh@1488 -- # bdf_sysfs_path=/sys/devices/pci0000:c7/0000:c7:05.0/0000:ca:00.0/nvme/nvme2 00:04:32.896 21:09:47 -- common/autotest_common.sh@1489 -- # [[ -z /sys/devices/pci0000:c7/0000:c7:05.0/0000:ca:00.0/nvme/nvme2 ]] 00:04:32.896 21:09:47 -- common/autotest_common.sh@1493 -- # basename /sys/devices/pci0000:c7/0000:c7:05.0/0000:ca:00.0/nvme/nvme2 00:04:32.896 21:09:47 -- 
common/autotest_common.sh@1493 -- # printf '%s\n' nvme2 00:04:32.896 21:09:47 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme2 00:04:32.896 21:09:47 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme2 ]] 00:04:32.896 21:09:47 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme2 00:04:32.896 21:09:47 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:32.896 21:09:47 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:32.896 21:09:47 -- common/autotest_common.sh@1531 -- # oacs=' 0xe' 00:04:32.896 21:09:47 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:32.896 21:09:47 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:32.896 21:09:47 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:04:32.896 21:09:47 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:32.896 21:09:47 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:32.896 21:09:47 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:32.896 21:09:47 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:32.896 21:09:47 -- common/autotest_common.sh@1543 -- # continue 00:04:32.896 21:09:47 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:32.896 21:09:47 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:cb:00.0 00:04:32.896 21:09:47 -- common/autotest_common.sh@1488 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 00:04:32.896 21:09:47 -- common/autotest_common.sh@1488 -- # grep 0000:cb:00.0/nvme/nvme 00:04:32.896 21:09:47 -- common/autotest_common.sh@1488 -- # bdf_sysfs_path=/sys/devices/pci0000:c7/0000:c7:07.0/0000:cb:00.0/nvme/nvme1 00:04:32.896 21:09:47 -- common/autotest_common.sh@1489 -- # [[ -z /sys/devices/pci0000:c7/0000:c7:07.0/0000:cb:00.0/nvme/nvme1 ]] 00:04:32.896 21:09:47 -- common/autotest_common.sh@1493 -- # basename /sys/devices/pci0000:c7/0000:c7:07.0/0000:cb:00.0/nvme/nvme1 00:04:32.896 21:09:47 -- common/autotest_common.sh@1493 -- # printf '%s\n' nvme1 00:04:32.896 21:09:47 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:04:32.896 21:09:47 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:04:32.896 21:09:47 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:04:32.896 21:09:47 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:32.896 21:09:47 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:32.896 21:09:47 -- common/autotest_common.sh@1531 -- # oacs=' 0xe' 00:04:32.896 21:09:47 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:32.896 21:09:47 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:33.156 21:09:47 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:04:33.156 21:09:47 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:33.156 21:09:47 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:33.156 21:09:47 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:33.156 21:09:47 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:33.156 21:09:47 -- common/autotest_common.sh@1543 -- # continue 00:04:33.156 21:09:47 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:33.156 21:09:47 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:33.156 21:09:47 -- common/autotest_common.sh@10 -- # set +x 00:04:33.156 21:09:47 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:33.156 21:09:47 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:33.156 21:09:47 -- common/autotest_common.sh@10 -- # set +x 00:04:33.156 21:09:47 -- 
spdk/autotest.sh@139 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh 00:04:36.501 0000:74:02.0 (8086 0cfe): idxd -> vfio-pci 00:04:36.501 0000:f1:02.0 (8086 0cfe): idxd -> vfio-pci 00:04:36.501 0000:79:02.0 (8086 0cfe): idxd -> vfio-pci 00:04:36.501 0000:6f:01.0 (8086 0b25): idxd -> vfio-pci 00:04:36.501 0000:6f:02.0 (8086 0cfe): idxd -> vfio-pci 00:04:36.501 0000:f6:01.0 (8086 0b25): idxd -> vfio-pci 00:04:36.501 0000:f6:02.0 (8086 0cfe): idxd -> vfio-pci 00:04:36.501 0000:74:01.0 (8086 0b25): idxd -> vfio-pci 00:04:36.501 0000:6a:02.0 (8086 0cfe): idxd -> vfio-pci 00:04:36.501 0000:79:01.0 (8086 0b25): idxd -> vfio-pci 00:04:36.501 0000:ec:01.0 (8086 0b25): idxd -> vfio-pci 00:04:36.501 0000:6a:01.0 (8086 0b25): idxd -> vfio-pci 00:04:36.501 0000:ec:02.0 (8086 0cfe): idxd -> vfio-pci 00:04:36.501 0000:e7:01.0 (8086 0b25): idxd -> vfio-pci 00:04:36.501 0000:e7:02.0 (8086 0cfe): idxd -> vfio-pci 00:04:36.501 0000:f1:01.0 (8086 0b25): idxd -> vfio-pci 00:04:37.930 0000:c9:00.0 (8086 0a54): nvme -> vfio-pci 00:04:37.930 0000:cb:00.0 (8086 0a54): nvme -> vfio-pci 00:04:38.190 0000:ca:00.0 (8086 0a54): nvme -> vfio-pci 00:04:38.760 21:09:53 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:38.760 21:09:53 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:38.760 21:09:53 -- common/autotest_common.sh@10 -- # set +x 00:04:38.760 21:09:53 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:38.760 21:09:53 -- common/autotest_common.sh@1577 -- # mapfile -t bdfs 00:04:38.760 21:09:53 -- common/autotest_common.sh@1577 -- # get_nvme_bdfs_by_id 0x0a54 00:04:38.760 21:09:53 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:38.760 21:09:53 -- common/autotest_common.sh@1563 -- # local bdfs 00:04:38.760 21:09:53 -- common/autotest_common.sh@1565 -- # get_nvme_bdfs 00:04:38.760 21:09:53 -- common/autotest_common.sh@1499 -- # bdfs=() 00:04:38.760 21:09:53 -- common/autotest_common.sh@1499 -- # local bdfs 00:04:38.760 21:09:53 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:38.760 21:09:53 -- common/autotest_common.sh@1500 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:38.760 21:09:53 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:04:38.760 21:09:53 -- common/autotest_common.sh@1501 -- # (( 3 == 0 )) 00:04:38.760 21:09:53 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:c9:00.0 0000:ca:00.0 0000:cb:00.0 00:04:38.760 21:09:53 -- common/autotest_common.sh@1565 -- # for bdf in $(get_nvme_bdfs) 00:04:38.760 21:09:53 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:c9:00.0/device 00:04:38.760 21:09:53 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:04:38.760 21:09:53 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:38.760 21:09:53 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:04:38.760 21:09:53 -- common/autotest_common.sh@1565 -- # for bdf in $(get_nvme_bdfs) 00:04:38.760 21:09:53 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:ca:00.0/device 00:04:38.760 21:09:53 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:04:38.760 21:09:53 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:38.760 21:09:53 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:04:38.760 21:09:53 -- common/autotest_common.sh@1565 -- # for bdf in $(get_nvme_bdfs) 00:04:38.760 21:09:53 -- common/autotest_common.sh@1566 -- # cat 
/sys/bus/pci/devices/0000:cb:00.0/device 00:04:38.760 21:09:53 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:04:38.760 21:09:53 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:38.760 21:09:53 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:04:38.760 21:09:53 -- common/autotest_common.sh@1572 -- # printf '%s\n' 0000:c9:00.0 0000:ca:00.0 0000:cb:00.0 00:04:38.760 21:09:53 -- common/autotest_common.sh@1578 -- # [[ -z 0000:c9:00.0 ]] 00:04:38.760 21:09:53 -- common/autotest_common.sh@1583 -- # spdk_tgt_pid=1000939 00:04:38.760 21:09:53 -- common/autotest_common.sh@1584 -- # waitforlisten 1000939 00:04:38.760 21:09:53 -- common/autotest_common.sh@817 -- # '[' -z 1000939 ']' 00:04:38.761 21:09:53 -- common/autotest_common.sh@1582 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt 00:04:38.761 21:09:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:38.761 21:09:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:38.761 21:09:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:38.761 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:38.761 21:09:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:38.761 21:09:53 -- common/autotest_common.sh@10 -- # set +x 00:04:39.021 [2024-04-24 21:09:53.745959] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization... 00:04:39.021 [2024-04-24 21:09:53.746099] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1000939 ] 00:04:39.021 EAL: No free 2048 kB hugepages reported on node 1 00:04:39.021 [2024-04-24 21:09:53.876559] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:39.021 [2024-04-24 21:09:53.974688] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:39.592 21:09:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:39.592 21:09:54 -- common/autotest_common.sh@850 -- # return 0 00:04:39.592 21:09:54 -- common/autotest_common.sh@1586 -- # bdf_id=0 00:04:39.592 21:09:54 -- common/autotest_common.sh@1587 -- # for bdf in "${bdfs[@]}" 00:04:39.592 21:09:54 -- common/autotest_common.sh@1588 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:c9:00.0 00:04:42.894 nvme0n1 00:04:42.894 21:09:57 -- common/autotest_common.sh@1590 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:42.894 [2024-04-24 21:09:57.579744] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:04:42.894 request: 00:04:42.894 { 00:04:42.894 "nvme_ctrlr_name": "nvme0", 00:04:42.894 "password": "test", 00:04:42.894 "method": "bdev_nvme_opal_revert", 00:04:42.894 "req_id": 1 00:04:42.894 } 00:04:42.894 Got JSON-RPC error response 00:04:42.894 response: 00:04:42.894 { 00:04:42.894 "code": -32602, 00:04:42.894 "message": "Invalid parameters" 00:04:42.894 } 00:04:42.894 21:09:57 -- common/autotest_common.sh@1590 -- # true 00:04:42.894 21:09:57 -- common/autotest_common.sh@1591 -- # (( ++bdf_id )) 00:04:42.894 21:09:57 -- common/autotest_common.sh@1587 -- # for bdf in "${bdfs[@]}" 00:04:42.894 21:09:57 -- common/autotest_common.sh@1588 -- # 
/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme1 -t pcie -a 0000:ca:00.0 00:04:46.192 nvme1n1 00:04:46.192 21:10:00 -- common/autotest_common.sh@1590 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme1 -p test 00:04:46.192 [2024-04-24 21:10:00.706993] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme1 not support opal 00:04:46.192 request: 00:04:46.192 { 00:04:46.192 "nvme_ctrlr_name": "nvme1", 00:04:46.192 "password": "test", 00:04:46.192 "method": "bdev_nvme_opal_revert", 00:04:46.192 "req_id": 1 00:04:46.192 } 00:04:46.192 Got JSON-RPC error response 00:04:46.192 response: 00:04:46.192 { 00:04:46.192 "code": -32602, 00:04:46.192 "message": "Invalid parameters" 00:04:46.192 } 00:04:46.192 21:10:00 -- common/autotest_common.sh@1590 -- # true 00:04:46.192 21:10:00 -- common/autotest_common.sh@1591 -- # (( ++bdf_id )) 00:04:46.192 21:10:00 -- common/autotest_common.sh@1587 -- # for bdf in "${bdfs[@]}" 00:04:46.192 21:10:00 -- common/autotest_common.sh@1588 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme2 -t pcie -a 0000:cb:00.0 00:04:48.757 nvme2n1 00:04:48.757 21:10:03 -- common/autotest_common.sh@1590 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme2 -p test 00:04:49.017 [2024-04-24 21:10:03.838667] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme2 not support opal 00:04:49.017 request: 00:04:49.017 { 00:04:49.017 "nvme_ctrlr_name": "nvme2", 00:04:49.017 "password": "test", 00:04:49.017 "method": "bdev_nvme_opal_revert", 00:04:49.017 "req_id": 1 00:04:49.017 } 00:04:49.017 Got JSON-RPC error response 00:04:49.017 response: 00:04:49.017 { 00:04:49.017 "code": -32602, 00:04:49.017 "message": "Invalid parameters" 00:04:49.017 } 00:04:49.017 21:10:03 -- common/autotest_common.sh@1590 -- # true 00:04:49.017 21:10:03 -- common/autotest_common.sh@1591 -- # (( ++bdf_id )) 00:04:49.018 21:10:03 -- common/autotest_common.sh@1594 -- # killprocess 1000939 00:04:49.018 21:10:03 -- common/autotest_common.sh@936 -- # '[' -z 1000939 ']' 00:04:49.018 21:10:03 -- common/autotest_common.sh@940 -- # kill -0 1000939 00:04:49.018 21:10:03 -- common/autotest_common.sh@941 -- # uname 00:04:49.018 21:10:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:49.018 21:10:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1000939 00:04:49.018 21:10:03 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:49.018 21:10:03 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:49.018 21:10:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1000939' 00:04:49.018 killing process with pid 1000939 00:04:49.018 21:10:03 -- common/autotest_common.sh@955 -- # kill 1000939 00:04:49.018 21:10:03 -- common/autotest_common.sh@960 -- # wait 1000939 00:04:53.226 21:10:07 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:53.226 21:10:07 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:53.226 21:10:07 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:53.226 21:10:07 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:53.226 21:10:07 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:53.226 21:10:07 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:53.226 21:10:07 -- common/autotest_common.sh@10 -- # set +x 00:04:53.226 21:10:07 -- spdk/autotest.sh@164 -- # run_test env /var/jenkins/workspace/dsa-phy-autotest/spdk/test/env/env.sh 
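The three "Invalid parameters" responses above are the expected outcome of opal_revert_cleanup on controllers without Opal support: each controller is attached, a revert is attempted, and the JSON-RPC failure is swallowed. Reduced to its essentials (annotation, not captured output; one pass per controller, repeated for nvme1 at 0000:ca:00.0 and nvme2 at 0000:cb:00.0):

  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:c9:00.0
  ./scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test || true
  # the target logs "nvme0 not support opal" and returns JSON-RPC error
  # -32602; the traced `true` discards the non-zero exit status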
00:04:53.226 21:10:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:53.226 21:10:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:53.226 21:10:07 -- common/autotest_common.sh@10 -- # set +x 00:04:53.226 ************************************ 00:04:53.226 START TEST env 00:04:53.226 ************************************ 00:04:53.226 21:10:07 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/env/env.sh 00:04:53.226 * Looking for test storage... 00:04:53.226 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/env 00:04:53.226 21:10:07 -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/dsa-phy-autotest/spdk/test/env/memory/memory_ut 00:04:53.226 21:10:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:53.226 21:10:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:53.226 21:10:07 -- common/autotest_common.sh@10 -- # set +x 00:04:53.226 ************************************ 00:04:53.226 START TEST env_memory 00:04:53.226 ************************************ 00:04:53.226 21:10:07 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/env/memory/memory_ut 00:04:53.226 00:04:53.226 00:04:53.226 CUnit - A unit testing framework for C - Version 2.1-3 00:04:53.226 http://cunit.sourceforge.net/ 00:04:53.226 00:04:53.226 00:04:53.226 Suite: memory 00:04:53.226 Test: alloc and free memory map ...[2024-04-24 21:10:08.009874] /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:53.226 passed 00:04:53.226 Test: mem map translation ...[2024-04-24 21:10:08.057133] /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:53.226 [2024-04-24 21:10:08.057167] /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:53.226 [2024-04-24 21:10:08.057247] /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:53.226 [2024-04-24 21:10:08.057286] /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:53.226 passed 00:04:53.226 Test: mem map registration ...[2024-04-24 21:10:08.143157] /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:53.226 [2024-04-24 21:10:08.143187] /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:53.226 passed 00:04:53.486 Test: mem map adjacent registrations ...passed 00:04:53.486 00:04:53.486 Run Summary: Type Total Ran Passed Failed Inactive 00:04:53.486 suites 1 1 n/a 0 0 00:04:53.486 tests 4 4 4 0 0 00:04:53.486 asserts 152 152 152 0 n/a 00:04:53.486 00:04:53.486 Elapsed time = 0.292 seconds 00:04:53.486 00:04:53.486 real 0m0.316s 00:04:53.486 user 0m0.294s 00:04:53.487 sys 0m0.021s 00:04:53.487 21:10:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:53.487 21:10:08 -- common/autotest_common.sh@10 -- # set +x 00:04:53.487 
************************************ 00:04:53.487 END TEST env_memory 00:04:53.487 ************************************ 00:04:53.487 21:10:08 -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/dsa-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:53.487 21:10:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:53.487 21:10:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:53.487 21:10:08 -- common/autotest_common.sh@10 -- # set +x 00:04:53.487 ************************************ 00:04:53.487 START TEST env_vtophys 00:04:53.487 ************************************ 00:04:53.487 21:10:08 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:53.487 EAL: lib.eal log level changed from notice to debug 00:04:53.487 EAL: Detected lcore 0 as core 0 on socket 0 00:04:53.487 EAL: Detected lcore 1 as core 1 on socket 0 00:04:53.487 EAL: Detected lcore 2 as core 2 on socket 0 00:04:53.487 EAL: Detected lcore 3 as core 3 on socket 0 00:04:53.487 EAL: Detected lcore 4 as core 4 on socket 0 00:04:53.487 EAL: Detected lcore 5 as core 5 on socket 0 00:04:53.487 EAL: Detected lcore 6 as core 6 on socket 0 00:04:53.487 EAL: Detected lcore 7 as core 7 on socket 0 00:04:53.487 EAL: Detected lcore 8 as core 8 on socket 0 00:04:53.487 EAL: Detected lcore 9 as core 9 on socket 0 00:04:53.487 EAL: Detected lcore 10 as core 10 on socket 0 00:04:53.487 EAL: Detected lcore 11 as core 11 on socket 0 00:04:53.487 EAL: Detected lcore 12 as core 12 on socket 0 00:04:53.487 EAL: Detected lcore 13 as core 13 on socket 0 00:04:53.487 EAL: Detected lcore 14 as core 14 on socket 0 00:04:53.487 EAL: Detected lcore 15 as core 15 on socket 0 00:04:53.487 EAL: Detected lcore 16 as core 16 on socket 0 00:04:53.487 EAL: Detected lcore 17 as core 17 on socket 0 00:04:53.487 EAL: Detected lcore 18 as core 18 on socket 0 00:04:53.487 EAL: Detected lcore 19 as core 19 on socket 0 00:04:53.487 EAL: Detected lcore 20 as core 20 on socket 0 00:04:53.487 EAL: Detected lcore 21 as core 21 on socket 0 00:04:53.487 EAL: Detected lcore 22 as core 22 on socket 0 00:04:53.487 EAL: Detected lcore 23 as core 23 on socket 0 00:04:53.487 EAL: Detected lcore 24 as core 24 on socket 0 00:04:53.487 EAL: Detected lcore 25 as core 25 on socket 0 00:04:53.487 EAL: Detected lcore 26 as core 26 on socket 0 00:04:53.487 EAL: Detected lcore 27 as core 27 on socket 0 00:04:53.487 EAL: Detected lcore 28 as core 28 on socket 0 00:04:53.487 EAL: Detected lcore 29 as core 29 on socket 0 00:04:53.487 EAL: Detected lcore 30 as core 30 on socket 0 00:04:53.487 EAL: Detected lcore 31 as core 31 on socket 0 00:04:53.487 EAL: Detected lcore 32 as core 0 on socket 1 00:04:53.487 EAL: Detected lcore 33 as core 1 on socket 1 00:04:53.487 EAL: Detected lcore 34 as core 2 on socket 1 00:04:53.487 EAL: Detected lcore 35 as core 3 on socket 1 00:04:53.487 EAL: Detected lcore 36 as core 4 on socket 1 00:04:53.487 EAL: Detected lcore 37 as core 5 on socket 1 00:04:53.487 EAL: Detected lcore 38 as core 6 on socket 1 00:04:53.487 EAL: Detected lcore 39 as core 7 on socket 1 00:04:53.487 EAL: Detected lcore 40 as core 8 on socket 1 00:04:53.487 EAL: Detected lcore 41 as core 9 on socket 1 00:04:53.487 EAL: Detected lcore 42 as core 10 on socket 1 00:04:53.487 EAL: Detected lcore 43 as core 11 on socket 1 00:04:53.487 EAL: Detected lcore 44 as core 12 on socket 1 00:04:53.487 EAL: Detected lcore 45 as core 13 on socket 1 00:04:53.487 EAL: Detected lcore 46 as core 14 on socket 1 
00:04:53.487 EAL: Detected lcore 47 as core 15 on socket 1 00:04:53.487 EAL: Detected lcore 48 as core 16 on socket 1 00:04:53.487 EAL: Detected lcore 49 as core 17 on socket 1 00:04:53.487 EAL: Detected lcore 50 as core 18 on socket 1 00:04:53.487 EAL: Detected lcore 51 as core 19 on socket 1 00:04:53.487 EAL: Detected lcore 52 as core 20 on socket 1 00:04:53.487 EAL: Detected lcore 53 as core 21 on socket 1 00:04:53.487 EAL: Detected lcore 54 as core 22 on socket 1 00:04:53.487 EAL: Detected lcore 55 as core 23 on socket 1 00:04:53.487 EAL: Detected lcore 56 as core 24 on socket 1 00:04:53.487 EAL: Detected lcore 57 as core 25 on socket 1 00:04:53.487 EAL: Detected lcore 58 as core 26 on socket 1 00:04:53.487 EAL: Detected lcore 59 as core 27 on socket 1 00:04:53.487 EAL: Detected lcore 60 as core 28 on socket 1 00:04:53.487 EAL: Detected lcore 61 as core 29 on socket 1 00:04:53.487 EAL: Detected lcore 62 as core 30 on socket 1 00:04:53.487 EAL: Detected lcore 63 as core 31 on socket 1 00:04:53.487 EAL: Detected lcore 64 as core 0 on socket 0 00:04:53.487 EAL: Detected lcore 65 as core 1 on socket 0 00:04:53.487 EAL: Detected lcore 66 as core 2 on socket 0 00:04:53.487 EAL: Detected lcore 67 as core 3 on socket 0 00:04:53.487 EAL: Detected lcore 68 as core 4 on socket 0 00:04:53.487 EAL: Detected lcore 69 as core 5 on socket 0 00:04:53.487 EAL: Detected lcore 70 as core 6 on socket 0 00:04:53.487 EAL: Detected lcore 71 as core 7 on socket 0 00:04:53.487 EAL: Detected lcore 72 as core 8 on socket 0 00:04:53.487 EAL: Detected lcore 73 as core 9 on socket 0 00:04:53.487 EAL: Detected lcore 74 as core 10 on socket 0 00:04:53.487 EAL: Detected lcore 75 as core 11 on socket 0 00:04:53.487 EAL: Detected lcore 76 as core 12 on socket 0 00:04:53.487 EAL: Detected lcore 77 as core 13 on socket 0 00:04:53.487 EAL: Detected lcore 78 as core 14 on socket 0 00:04:53.487 EAL: Detected lcore 79 as core 15 on socket 0 00:04:53.487 EAL: Detected lcore 80 as core 16 on socket 0 00:04:53.487 EAL: Detected lcore 81 as core 17 on socket 0 00:04:53.487 EAL: Detected lcore 82 as core 18 on socket 0 00:04:53.487 EAL: Detected lcore 83 as core 19 on socket 0 00:04:53.487 EAL: Detected lcore 84 as core 20 on socket 0 00:04:53.487 EAL: Detected lcore 85 as core 21 on socket 0 00:04:53.487 EAL: Detected lcore 86 as core 22 on socket 0 00:04:53.487 EAL: Detected lcore 87 as core 23 on socket 0 00:04:53.487 EAL: Detected lcore 88 as core 24 on socket 0 00:04:53.487 EAL: Detected lcore 89 as core 25 on socket 0 00:04:53.487 EAL: Detected lcore 90 as core 26 on socket 0 00:04:53.487 EAL: Detected lcore 91 as core 27 on socket 0 00:04:53.487 EAL: Detected lcore 92 as core 28 on socket 0 00:04:53.487 EAL: Detected lcore 93 as core 29 on socket 0 00:04:53.487 EAL: Detected lcore 94 as core 30 on socket 0 00:04:53.487 EAL: Detected lcore 95 as core 31 on socket 0 00:04:53.487 EAL: Detected lcore 96 as core 0 on socket 1 00:04:53.487 EAL: Detected lcore 97 as core 1 on socket 1 00:04:53.487 EAL: Detected lcore 98 as core 2 on socket 1 00:04:53.487 EAL: Detected lcore 99 as core 3 on socket 1 00:04:53.487 EAL: Detected lcore 100 as core 4 on socket 1 00:04:53.487 EAL: Detected lcore 101 as core 5 on socket 1 00:04:53.487 EAL: Detected lcore 102 as core 6 on socket 1 00:04:53.487 EAL: Detected lcore 103 as core 7 on socket 1 00:04:53.487 EAL: Detected lcore 104 as core 8 on socket 1 00:04:53.487 EAL: Detected lcore 105 as core 9 on socket 1 00:04:53.487 EAL: Detected lcore 106 as core 10 on socket 1 00:04:53.487 EAL: Detected 
lcore 107 as core 11 on socket 1 00:04:53.487 EAL: Detected lcore 108 as core 12 on socket 1 00:04:53.487 EAL: Detected lcore 109 as core 13 on socket 1 00:04:53.487 EAL: Detected lcore 110 as core 14 on socket 1 00:04:53.487 EAL: Detected lcore 111 as core 15 on socket 1 00:04:53.487 EAL: Detected lcore 112 as core 16 on socket 1 00:04:53.487 EAL: Detected lcore 113 as core 17 on socket 1 00:04:53.487 EAL: Detected lcore 114 as core 18 on socket 1 00:04:53.487 EAL: Detected lcore 115 as core 19 on socket 1 00:04:53.487 EAL: Detected lcore 116 as core 20 on socket 1 00:04:53.487 EAL: Detected lcore 117 as core 21 on socket 1 00:04:53.487 EAL: Detected lcore 118 as core 22 on socket 1 00:04:53.487 EAL: Detected lcore 119 as core 23 on socket 1 00:04:53.487 EAL: Detected lcore 120 as core 24 on socket 1 00:04:53.487 EAL: Detected lcore 121 as core 25 on socket 1 00:04:53.487 EAL: Detected lcore 122 as core 26 on socket 1 00:04:53.487 EAL: Detected lcore 123 as core 27 on socket 1 00:04:53.487 EAL: Detected lcore 124 as core 28 on socket 1 00:04:53.487 EAL: Detected lcore 125 as core 29 on socket 1 00:04:53.487 EAL: Detected lcore 126 as core 30 on socket 1 00:04:53.487 EAL: Detected lcore 127 as core 31 on socket 1 00:04:53.487 EAL: Maximum logical cores by configuration: 128 00:04:53.487 EAL: Detected CPU lcores: 128 00:04:53.487 EAL: Detected NUMA nodes: 2 00:04:53.487 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:04:53.487 EAL: Detected shared linkage of DPDK 00:04:53.747 EAL: No shared files mode enabled, IPC will be disabled 00:04:53.747 EAL: Bus pci wants IOVA as 'DC' 00:04:53.747 EAL: Buses did not request a specific IOVA mode. 00:04:53.747 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:53.747 EAL: Selected IOVA mode 'VA' 00:04:53.747 EAL: No free 2048 kB hugepages reported on node 1 00:04:53.747 EAL: Probing VFIO support... 00:04:53.747 EAL: IOMMU type 1 (Type 1) is supported 00:04:53.747 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:53.747 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:53.747 EAL: VFIO support initialized 00:04:53.747 EAL: Ask a virtual area of 0x2e000 bytes 00:04:53.747 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:53.747 EAL: Setting up physically contiguous memory... 
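The "No free 2048 kB hugepages reported on node 1" message above is consistent with the Hugepages summary printed by setup.sh status earlier in the run: node0 holds all 2048 of the 2 MiB pages, node1 holds none. Each memseg list created next reserves 0x400000000 bytes of virtual address space, i.e. 8192 segments x 2 MiB = 16 GiB. The per-node pools can be confirmed outside the harness via standard sysfs paths (annotation, not captured output):

  cat /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages
  # expected on this host: node0 -> 2048, node1 -> 0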
00:04:53.747 EAL: Ask a virtual area of 0x2e000 bytes
00:04:53.747 EAL: Virtual area found at 0x200000000000 (size = 0x2e000)
00:04:53.747 EAL: Setting up physically contiguous memory...
00:04:53.747 EAL: Setting maximum number of open files to 524288
00:04:53.747 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152
00:04:53.747 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152
00:04:53.747 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152
00:04:53.747 EAL: Ask a virtual area of 0x61000 bytes
00:04:53.747 EAL: Virtual area found at 0x20000002e000 (size = 0x61000)
00:04:53.747 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:04:53.747 EAL: Ask a virtual area of 0x400000000 bytes
00:04:53.747 EAL: Virtual area found at 0x200000200000 (size = 0x400000000)
00:04:53.747 EAL: VA reserved for memseg list at 0x200000200000, size 400000000
[... the ask/found/reserve triple repeats for the remaining three socket-0 lists (VA reserved at 0x200400400000, 0x200800600000 and 0x200c00800000) ...]
00:04:53.747 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152
[... and again for the four socket-1 lists (VA reserved at 0x201000a00000, 0x201400c00000, 0x201800e00000 and 0x201c01000000) ...]
00:04:53.747 EAL: Hugepages will be freed exactly as allocated.
00:04:53.747 EAL: No shared files mode enabled, IPC is disabled
00:04:53.747 EAL: No shared files mode enabled, IPC is disabled
00:04:53.747 EAL: TSC frequency is ~1900000 KHz
00:04:53.748 EAL: Main lcore 0 is ready (tid=7f4e8f569a40;cpuset=[0])
00:04:53.748 EAL: Trying to obtain current memory policy.
00:04:53.748 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:53.748 EAL: Restoring previous memory policy: 0
00:04:53.748 EAL: request: mp_malloc_sync
00:04:53.748 EAL: No shared files mode enabled, IPC is disabled
00:04:53.748 EAL: Heap on socket 0 was expanded by 2MB
00:04:53.748 EAL: No shared files mode enabled, IPC is disabled
00:04:53.748 EAL: No PCI address specified using 'addr=' in: bus=pci
00:04:53.748 EAL: Mem event callback 'spdk:(nil)' registered
00:04:53.748
00:04:53.748 CUnit - A unit testing framework for C - Version 2.1-3
00:04:53.748 http://cunit.sourceforge.net/
00:04:53.748
00:04:53.748 Suite: components_suite
00:04:54.009 Test: vtophys_malloc_test ...passed
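vtophys_malloc_test checks that memory SPDK knows about can be translated from virtual to physical addresses, while untracked allocations cannot. A short sketch of the translation it exercises, using public SPDK env APIs (the program name and sizes are this sketch's own, not taken from the test source):

/* vtophys_demo.c - sketch of the translation the vtophys tests exercise. */
#include <stdio.h>
#include <inttypes.h>
#include "spdk/env.h"

int main(void)
{
    struct spdk_env_opts opts;

    spdk_env_opts_init(&opts);
    opts.name = "vtophys_demo";
    if (spdk_env_init(&opts) < 0) {
        fprintf(stderr, "env init failed\n");
        return 1;
    }

    /* DMA-safe allocation from the hugepage-backed heap seen expanding above. */
    void *buf = spdk_dma_malloc(4096, 4096, NULL);
    if (buf == NULL) {
        return 1;
    }

    /* Translate the virtual address; SPDK_VTOPHYS_ERROR means "not mapped". */
    uint64_t paddr = spdk_vtophys(buf, NULL);
    printf("vaddr %p -> paddr 0x%" PRIx64 "\n", buf, paddr);

    spdk_dma_free(buf);
    return 0;
}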
00:04:54.009 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy.
00:04:54.009 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:54.009 EAL: Restoring previous memory policy: 4
00:04:54.009 EAL: Calling mem event callback 'spdk:(nil)'
00:04:54.009 EAL: request: mp_malloc_sync
00:04:54.009 EAL: No shared files mode enabled, IPC is disabled
00:04:54.009 EAL: Heap on socket 0 was expanded by 4MB
00:04:54.009 EAL: Calling mem event callback 'spdk:(nil)'
00:04:54.009 EAL: request: mp_malloc_sync
00:04:54.009 EAL: No shared files mode enabled, IPC is disabled
00:04:54.009 EAL: Heap on socket 0 was shrunk by 4MB
[... the obtain-policy/expand/shrink cycle repeats for 6MB, 10MB, 18MB, 34MB, 66MB, 130MB, 258MB, 514MB and 1026MB allocations (timestamps 00:04:54.009 through 00:04:56.198) ...]
00:04:56.769 passed
00:04:56.769
00:04:56.769 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:04:56.769               suites      1      1    n/a      0        0
00:04:56.769                tests      2      2      2      0        0
00:04:56.769              asserts    497    497    497      0      n/a
00:04:56.769
00:04:56.769 Elapsed time = 2.839 seconds
00:04:56.769 EAL: Calling mem event callback 'spdk:(nil)'
00:04:56.769 EAL: request: mp_malloc_sync
00:04:56.769 EAL: No shared files mode enabled, IPC is disabled
00:04:56.769 EAL: Heap on socket 0 was shrunk by 2MB
00:04:56.769 EAL: No shared files mode enabled, IPC is disabled
00:04:56.769 EAL: No shared files mode enabled, IPC is disabled
00:04:56.769 EAL: No shared files mode enabled, IPC is disabled
00:04:56.769
00:04:56.769 real 0m3.066s
00:04:56.769 user 0m2.407s
00:04:56.769 sys 0m0.613s
00:04:56.769 21:10:11 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:04:56.769 21:10:11 -- common/autotest_common.sh@10 -- # set +x
00:04:56.769 ************************************
00:04:56.769 END TEST env_vtophys
00:04:56.769 ************************************
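Each "expanded by"/"shrunk by" pair above is DPDK growing and shrinking its heap while notifying registered listeners; the 'spdk:(nil)' callback is how SPDK keeps its vtophys map in sync. A hedged sketch of that registration (the callback name "demo" and the 64 MB allocation are illustrative, and whether a FREE event actually fires depends on the EAL memory mode):

/* mem_event_demo.c - register a DPDK memory-event callback like the
 * 'spdk:(nil)' one in the log, then trigger alloc/free events. */
#include <stdio.h>
#include <rte_eal.h>
#include <rte_memory.h>
#include <rte_malloc.h>

static void
mem_event_cb(enum rte_mem_event event, const void *addr, size_t len, void *arg)
{
    (void)arg;
    printf("%s: addr %p len %zu\n",
           event == RTE_MEM_EVENT_ALLOC ? "ALLOC" : "FREE", addr, len);
}

int main(int argc, char **argv)
{
    if (rte_eal_init(argc, argv) < 0)
        return 1;

    /* Same kind of registration SPDK performs at env init. */
    if (rte_mem_event_callback_register("demo", mem_event_cb, NULL) < 0)
        return 1;

    /* A large allocation can force the heap to grow -> ALLOC event,
     * mirroring the "expanded by N MB" lines above. */
    void *p = rte_malloc(NULL, 64 * 1024 * 1024, 0);
    rte_free(p);            /* the heap may shrink -> FREE event */

    rte_eal_cleanup();
    return 0;
}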
00:04:56.769 21:10:11 -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/dsa-phy-autotest/spdk/test/env/pci/pci_ut
00:04:56.769 21:10:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:56.769 21:10:11 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:56.769 21:10:11 -- common/autotest_common.sh@10 -- # set +x
00:04:56.769 ************************************
00:04:56.769 START TEST env_pci
00:04:56.769 ************************************
00:04:56.769 21:10:11 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/env/pci/pci_ut
00:04:56.769
00:04:56.769 CUnit - A unit testing framework for C - Version 2.1-3
00:04:56.769 http://cunit.sourceforge.net/
00:04:56.769
00:04:56.769 Suite: pci
00:04:56.769 Test: pci_hook ...[2024-04-24 21:10:11.599879] /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1004410 has claimed it
00:04:56.769 EAL: Cannot find device (10000:00:01.0)
00:04:56.769 EAL: Failed to attach device on primary process
00:04:56.769 passed
00:04:56.769
00:04:56.769 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:04:56.769               suites      1      1    n/a      0        0
00:04:56.769                tests      1      1      1      0        0
00:04:56.769              asserts     25     25     25      0      n/a
00:04:56.769
00:04:56.769 Elapsed time = 0.052 seconds
00:04:56.769
00:04:56.769 real 0m0.106s
00:04:56.769 user 0m0.036s
00:04:56.769 sys 0m0.068s
00:04:56.769 21:10:11 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:04:56.769 21:10:11 -- common/autotest_common.sh@10 -- # set +x
00:04:56.769 ************************************
00:04:56.769 END TEST env_pci
00:04:56.769 ************************************
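pci_hook passes because the claim is expected to fail: the test arranges for the lock on the fake device 10000:00:01.0 to already be held under another PID. The sketch below illustrates the lock-file idea with plain fcntl(); the path format mirrors the log, but claim_device() is this sketch's own helper, not SPDK's internal implementation:

/* claim_demo.c - illustrative sketch of a per-device lock-file claim,
 * the mechanism the pci_hook test exercises. */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>

static int claim_device(const char *bdf)
{
    char path[128];
    struct flock lock = { .l_type = F_WRLCK, .l_whence = SEEK_SET };

    snprintf(path, sizeof(path), "/var/tmp/spdk_pci_lock_%s", bdf);
    int fd = open(path, O_RDWR | O_CREAT, 0600);
    if (fd < 0)
        return -1;

    /* If another process (like "process 1004410" in the log) already holds
     * the write lock, F_SETLK fails and the claim is rejected. */
    if (fcntl(fd, F_SETLK, &lock) < 0) {
        close(fd);
        return -1;
    }
    return fd;   /* keep it open; the lock drops when the process exits */
}

int main(void)
{
    return claim_device("10000:00:01.0") < 0 ? 1 : 0;
}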
00:04:56.769 21:10:11 -- env/env.sh@14 -- # argv='-c 0x1 '
00:04:56.769 21:10:11 -- env/env.sh@15 -- # uname
00:04:56.769 21:10:11 -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:04:56.769 21:10:11 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:04:56.769 21:10:11 -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/dsa-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:04:56.769 21:10:11 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']'
00:04:56.769 21:10:11 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:56.769 21:10:11 -- common/autotest_common.sh@10 -- # set +x
00:04:57.029 ************************************
00:04:57.029 START TEST env_dpdk_post_init
00:04:57.029 ************************************
00:04:57.029 21:10:11 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:04:57.029 EAL: Detected CPU lcores: 128
00:04:57.029 EAL: Detected NUMA nodes: 2
00:04:57.029 EAL: Detected shared linkage of DPDK
00:04:57.029 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:04:57.029 EAL: Selected IOVA mode 'VA'
00:04:57.029 EAL: No free 2048 kB hugepages reported on node 1
00:04:57.029 EAL: VFIO support initialized
00:04:57.029 TELEMETRY: No legacy callbacks, legacy socket not created
00:04:57.029 EAL: Using IOMMU type 1 (Type 1)
00:04:57.288 EAL: Ignore mapping IO port bar(1)
00:04:57.288 EAL: Ignore mapping IO port bar(3)
00:04:57.288 EAL: Probe PCI driver: spdk_idxd (8086:0b25) device: 0000:6a:01.0 (socket 0)
[... the same ignore-bar/probe sequence repeats for the remaining socket-0 idxd devices: 8086:0cfe at 0000:6a:02.0, 8086:0b25 at 0000:6f:01.0, 8086:0cfe at 0000:6f:02.0, 8086:0b25 at 0000:74:01.0, 8086:0cfe at 0000:74:02.0, 8086:0b25 at 0000:79:01.0 and 8086:0cfe at 0000:79:02.0 ...]
00:04:59.798 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:c9:00.0 (socket 1)
00:05:00.379 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:ca:00.0 (socket 1)
00:05:00.989 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:cb:00.0 (socket 1)
[... and the ignore-bar/probe sequence continues for the socket-1 idxd devices: 8086:0b25/8086:0cfe pairs at 0000:e7, 0000:ec, 0000:f1 and 0000:f6 (functions 01.0 and 02.0 each) ...]
00:05:07.018 EAL: Releasing PCI mapped resource for 0000:ca:00.0
00:05:07.018 EAL: Calling pci_unmap_resource for 0000:ca:00.0 at 0x202001184000
00:05:07.278 EAL: Releasing PCI mapped resource for 0000:c9:00.0
00:05:07.278 EAL: Calling pci_unmap_resource for 0000:c9:00.0 at 0x202001180000
00:05:07.537 EAL: Releasing PCI mapped resource for 0000:cb:00.0
00:05:07.537 EAL: Calling pci_unmap_resource for 0000:cb:00.0 at 0x202001188000
00:05:07.798 Starting DPDK initialization...
00:05:07.798 Starting SPDK post initialization...
00:05:07.798 SPDK NVMe probe
00:05:07.798 Attaching to 0000:c9:00.0
00:05:07.798 Attaching to 0000:ca:00.0
00:05:07.798 Attaching to 0000:cb:00.0
00:05:07.798 Attached to 0000:c9:00.0
00:05:07.798 Attached to 0000:ca:00.0
00:05:07.798 Attached to 0000:cb:00.0
00:05:07.798 Cleaning up...
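The Attaching/Attached lines come from enumerating the spdk_nvme-bound controllers. A minimal sketch of that probe loop with the public NVMe driver API (the program name is illustrative, and a real application would also detach the controllers on exit):

/* probe_demo.c - sketch of the PCIe enumeration behind the
 * "Attaching to 0000:c9:00.0 ... Attached" lines. */
#include <stdio.h>
#include <stdbool.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

static bool
probe_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
         struct spdk_nvme_ctrlr_opts *opts)
{
    printf("Attaching to %s\n", trid->traddr);
    return true;                /* true = attach to this controller */
}

static void
attach_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
          struct spdk_nvme_ctrlr *ctrlr,
          const struct spdk_nvme_ctrlr_opts *opts)
{
    printf("Attached to %s\n", trid->traddr);
}

int main(void)
{
    struct spdk_env_opts opts;

    spdk_env_opts_init(&opts);
    opts.name = "probe_demo";
    if (spdk_env_init(&opts) < 0)
        return 1;

    /* NULL trid -> enumerate the local PCIe bus, as the test app does. */
    if (spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL) != 0)
        return 1;
    return 0;
}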
00:05:09.713
00:05:09.713 real 0m12.600s
00:05:09.713 user 0m4.971s
00:05:09.713 sys 0m0.180s
00:05:09.713 21:10:24 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:05:09.713 21:10:24 -- common/autotest_common.sh@10 -- # set +x
00:05:09.713 ************************************
00:05:09.713 END TEST env_dpdk_post_init
00:05:09.713 ************************************
00:05:09.713 21:10:24 -- env/env.sh@26 -- # uname
00:05:09.713 21:10:24 -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:05:09.713 21:10:24 -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/dsa-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:05:09.713 21:10:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:05:09.713 21:10:24 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:09.713 21:10:24 -- common/autotest_common.sh@10 -- # set +x
00:05:09.713 ************************************
00:05:09.713 START TEST env_mem_callbacks
00:05:09.713 ************************************
00:05:09.713 21:10:24 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:05:09.713 EAL: Detected CPU lcores: 128
00:05:09.713 EAL: Detected NUMA nodes: 2
00:05:09.713 EAL: Detected shared linkage of DPDK
00:05:09.713 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:05:09.713 EAL: Selected IOVA mode 'VA'
00:05:09.713 EAL: No free 2048 kB hugepages reported on node 1
00:05:09.713 EAL: VFIO support initialized
00:05:09.713 TELEMETRY: No legacy callbacks, legacy socket not created
00:05:09.713
00:05:09.713 CUnit - A unit testing framework for C - Version 2.1-3
00:05:09.713 http://cunit.sourceforge.net/
00:05:09.713
00:05:09.713 Suite: memory
00:05:09.713 Test: test ...
00:05:09.713 register 0x200000200000 2097152
00:05:09.713 malloc 3145728
00:05:09.713 register 0x200000400000 4194304
00:05:09.713 buf 0x2000004fffc0 len 3145728 PASSED
00:05:09.713 malloc 64
00:05:09.713 buf 0x2000004ffec0 len 64 PASSED
00:05:09.713 malloc 4194304
00:05:09.713 register 0x200000800000 6291456
00:05:09.713 buf 0x2000009fffc0 len 4194304 PASSED
00:05:09.713 free 0x2000004fffc0 3145728
00:05:09.713 free 0x2000004ffec0 64
00:05:09.973 unregister 0x200000400000 4194304 PASSED
00:05:09.973 free 0x2000009fffc0 4194304
00:05:09.973 unregister 0x200000800000 6291456 PASSED
00:05:09.973 malloc 8388608
00:05:09.973 register 0x200000400000 10485760
00:05:09.973 buf 0x2000005fffc0 len 8388608 PASSED
00:05:09.973 free 0x2000005fffc0 8388608
00:05:09.973 unregister 0x200000400000 10485760 PASSED
00:05:09.973 passed
00:05:09.973
00:05:09.973 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:05:09.973               suites      1      1    n/a      0        0
00:05:09.973                tests      1      1      1      0        0
00:05:09.974              asserts     15     15     15      0      n/a
00:05:09.974
00:05:09.974 Elapsed time = 0.023 seconds
00:05:09.974
00:05:09.974 real 0m0.160s
00:05:09.974 user 0m0.056s
00:05:09.974 sys 0m0.104s
00:05:09.974 21:10:24 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:05:09.974 21:10:24 -- common/autotest_common.sh@10 -- # set +x
00:05:09.974 ************************************
00:05:09.974 END TEST env_mem_callbacks
00:05:09.974 ************************************
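The register/unregister pairs above track externally allocated buffers being added to and removed from SPDK's memory map, which is what drives the callbacks the suite asserts on. A sketch under the assumption of a 2 MB-aligned buffer, which spdk_mem_register() requires (the names here are this sketch's own):

/* mem_cb_demo.c - sketch of the register/unregister pairs the memory
 * suite prints ("register 0x... / unregister 0x... PASSED"). */
#include <stdio.h>
#include <stdlib.h>
#include "spdk/env.h"

int main(void)
{
    struct spdk_env_opts opts;

    spdk_env_opts_init(&opts);
    opts.name = "mem_cb_demo";
    if (spdk_env_init(&opts) < 0)
        return 1;

    /* 2 MB, hugepage-aligned, from the normal heap: SPDK does not know
     * about it until we register it. */
    size_t len = 2 * 1024 * 1024;
    void *buf = aligned_alloc(len, len);
    if (buf == NULL)
        return 1;

    if (spdk_mem_register(buf, len) != 0)   /* fires registered callbacks */
        return 1;
    /* ... buf can now be translated like DMA memory ... */
    if (spdk_mem_unregister(buf, len) != 0)
        return 1;

    free(buf);
    return 0;
}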
00:05:09.974
00:05:09.974 real 0m16.953s
00:05:09.974 user 0m7.987s
00:05:09.974 sys 0m1.436s
00:05:09.974 21:10:24 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:05:09.974 21:10:24 -- common/autotest_common.sh@10 -- # set +x
00:05:09.974 ************************************
00:05:09.974 END TEST env
00:05:09.974 ************************************
00:05:09.974 21:10:24 -- spdk/autotest.sh@165 -- # run_test rpc /var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc/rpc.sh
00:05:09.974 21:10:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:05:09.974 21:10:24 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:09.974 21:10:24 -- common/autotest_common.sh@10 -- # set +x
00:05:09.974 ************************************
00:05:09.974 START TEST rpc
00:05:09.974 ************************************
00:05:10.235 * Looking for test storage...
00:05:10.235 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc
00:05:10.235 21:10:24 -- rpc/rpc.sh@65 -- # spdk_pid=1007177
00:05:10.235 21:10:24 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:05:10.235 21:10:24 -- rpc/rpc.sh@67 -- # waitforlisten 1007177
00:05:10.235 21:10:24 -- common/autotest_common.sh@817 -- # '[' -z 1007177 ']'
00:05:10.235 21:10:24 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:10.235 21:10:24 -- common/autotest_common.sh@822 -- # local max_retries=100
00:05:10.235 21:10:24 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:10.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:10.235 21:10:24 -- common/autotest_common.sh@826 -- # xtrace_disable
00:05:10.235 21:10:24 -- common/autotest_common.sh@10 -- # set +x
00:05:10.235 21:10:24 -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -e bdev
00:05:10.235 [2024-04-24 21:10:25.077473] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization... [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1007177 ]
00:05:10.235 EAL: No free 2048 kB hugepages reported on node 1
00:05:10.495 [2024-04-24 21:10:25.212511] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:10.495 [2024-04-24 21:10:25.307730] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified.
00:05:10.495 [2024-04-24 21:10:25.307741] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1007177' to capture a snapshot of events at runtime.
00:05:10.495 [2024-04-24 21:10:25.307782] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:05:10.495 [2024-04-24 21:10:25.307791] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running.
00:05:10.495 [2024-04-24 21:10:25.307800] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1007177 for offline analysis/debug.
00:05:10.495 [2024-04-24 21:10:25.307837] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:11.067 21:10:25 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:05:11.067 21:10:25 -- common/autotest_common.sh@850 -- # return 0
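waitforlisten polls until the target accepts connections on /var/tmp/spdk.sock; the rpc_cmd helper then speaks JSON-RPC 2.0 over that socket. A minimal client sketch in plain POSIX C (spdk_get_version is a real RPC method; the framing and error handling here are deliberately abbreviated):

/* rpc_ping.c - minimal JSON-RPC client for the /var/tmp/spdk.sock socket
 * that waitforlisten polls above; plain POSIX, no SPDK headers needed. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>

int main(void)
{
    struct sockaddr_un addr = { .sun_family = AF_UNIX };
    const char *req =
        "{\"jsonrpc\":\"2.0\",\"id\":1,\"method\":\"spdk_get_version\"}";
    char resp[4096];

    strncpy(addr.sun_path, "/var/tmp/spdk.sock", sizeof(addr.sun_path) - 1);

    int fd = socket(AF_UNIX, SOCK_STREAM, 0);
    if (fd < 0 || connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("connect");      /* fails whenever no target is listening */
        return 1;
    }
    if (write(fd, req, strlen(req)) < 0)
        return 1;

    ssize_t n = read(fd, resp, sizeof(resp) - 1);
    if (n > 0) {
        resp[n] = '\0';
        printf("%s\n", resp);
    }
    close(fd);
    return 0;
}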
00:05:11.067 21:10:25 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/dsa-phy-autotest/spdk/python:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/dsa-phy-autotest/spdk/python:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc
00:05:11.067 21:10:25 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/dsa-phy-autotest/spdk/python:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/dsa-phy-autotest/spdk/python:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc
00:05:11.067 21:10:25 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd
00:05:11.067 21:10:25 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity
00:05:11.067 21:10:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:05:11.067 21:10:25 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:11.067 21:10:25 -- common/autotest_common.sh@10 -- # set +x
00:05:11.067 ************************************
00:05:11.067 START TEST rpc_integrity
00:05:11.067 ************************************
00:05:11.067 21:10:25 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:05:11.067 21:10:25 -- rpc/rpc.sh@12 -- # bdevs='[]'
00:05:11.067 21:10:25 -- rpc/rpc.sh@13 -- # jq length
00:05:11.067 21:10:25 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:05:11.067 21:10:25 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:05:11.067 21:10:25 -- rpc/rpc.sh@15 -- # malloc=Malloc0
00:05:11.067 21:10:25 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:05:11.067 21:10:25 -- rpc/rpc.sh@16 -- # bdevs='[
00:05:11.067 {
00:05:11.067   "name": "Malloc0",
00:05:11.067   "aliases": [ "be49e3af-a1ed-4595-985d-db9dded86920" ],
00:05:11.067   "product_name": "Malloc disk",
00:05:11.067   "block_size": 512,
00:05:11.067   "num_blocks": 16384,
00:05:11.067   "uuid": "be49e3af-a1ed-4595-985d-db9dded86920",
00:05:11.067   "assigned_rate_limits": { "rw_ios_per_sec": 0, "rw_mbytes_per_sec": 0, "r_mbytes_per_sec": 0, "w_mbytes_per_sec": 0 },
00:05:11.067   "claimed": false,
00:05:11.067   "zoned": false,
00:05:11.067   "supported_io_types": { "read": true, "write": true, "unmap": true, "write_zeroes": true, "flush": true, "reset": true, "compare": false, "compare_and_write": false, "abort": true, "nvme_admin": false, "nvme_io": false },
00:05:11.067   "memory_domains": [ { "dma_device_id": "system", "dma_device_type": 1 }, { "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", "dma_device_type": 2 } ],
00:05:11.067   "driver_specific": {}
00:05:11.067 }
00:05:11.068 ]'
00:05:11.068 21:10:25 -- rpc/rpc.sh@17 -- # jq length
00:05:11.068 21:10:25 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:05:11.068 21:10:25 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0
00:05:11.068 [2024-04-24 21:10:25.967266] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0
00:05:11.068 [2024-04-24 21:10:25.967321] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:05:11.068 [2024-04-24 21:10:25.967347] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000020180
00:05:11.068 [2024-04-24 21:10:25.967357] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:05:11.068 [2024-04-24 21:10:25.969059] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:05:11.068 [2024-04-24 21:10:25.969086] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:05:11.068 Passthru0
00:05:11.068 21:10:25 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
[... the dump now lists two bdevs: Malloc0 as above but with "claimed": true and "claim_type": "exclusive_write", plus "Passthru0" (uuid 4ed2ceb0-5e5c-5ff3-9f31-75f0f74237d4, product_name "passthru", block_size 512, num_blocks 16384, driver_specific.passthru: { "name": "Passthru0", "base_bdev_name": "Malloc0" }) ...]
00:05:11.068 21:10:25 -- rpc/rpc.sh@21 -- # jq length
00:05:11.068 21:10:26 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']'
00:05:11.068 21:10:26 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0
00:05:11.330 21:10:26 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0
00:05:11.330 21:10:26 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs
00:05:11.330 21:10:26 -- rpc/rpc.sh@25 -- # bdevs='[]'
00:05:11.330 21:10:26 -- rpc/rpc.sh@26 -- # jq length
00:05:11.330 21:10:26 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']'
00:05:11.330
00:05:11.330 real 0m0.220s
00:05:11.330 user 0m0.132s
00:05:11.330 sys 0m0.025s
00:05:11.330 21:10:26 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:05:11.330 21:10:26 -- common/autotest_common.sh@10 -- # set +x
00:05:11.330 ************************************
00:05:11.330 END TEST rpc_integrity
00:05:11.330 ************************************
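The "claimed": false to true flip, with "claim_type": "exclusive_write", is the passthru module claiming its base bdev so nothing else can write to it. A hedged fragment of how a bdev module takes such a claim with the public bdev-module API; this is not a standalone program, and the module struct here is illustrative rather than SPDK's actual passthru module:

/* claim_sketch.c - fragment showing the claim that vbdev_passthru takes on
 * Malloc0 above; assumes it runs inside a registered bdev module. */
#include "spdk/bdev_module.h"

static int demo_init(void) { return 0; }

static struct spdk_bdev_module demo_if = {
    .name = "demo",
    .module_init = demo_init,
};
SPDK_BDEV_MODULE_REGISTER(demo, &demo_if)

static void demo_event_cb(enum spdk_bdev_event_type type,
                          struct spdk_bdev *bdev, void *ctx) {}

static int claim_base(const char *name)
{
    struct spdk_bdev_desc *desc;

    /* Open the base bdev for writing, then claim it for this module;
     * this is the point where "claimed" becomes true in bdev_get_bdevs. */
    if (spdk_bdev_open_ext(name, true, demo_event_cb, NULL, &desc) != 0)
        return -1;
    return spdk_bdev_module_claim_bdev(spdk_bdev_desc_get_bdev(desc),
                                       desc, &demo_if);
}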
00:05:11.330 21:10:26 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins
00:05:11.330 21:10:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:05:11.330 21:10:26 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:11.330 21:10:26 -- common/autotest_common.sh@10 -- # set +x
00:05:11.330 ************************************
00:05:11.330 START TEST rpc_plugins
00:05:11.330 ************************************
00:05:11.330 21:10:26 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc
00:05:11.330 21:10:26 -- rpc/rpc.sh@30 -- # malloc=Malloc1
00:05:11.330 21:10:26 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs
[... one bdev: "Malloc1" (uuid 20adc002-4e79-4c14-a299-511e7e1f63d5, product_name "Malloc disk", block_size 4096, num_blocks 256, unclaimed, with the same assigned_rate_limits, supported_io_types and memory_domains as the Malloc0 dump above) ...]
00:05:11.330 21:10:26 -- rpc/rpc.sh@32 -- # jq length
00:05:11.330 21:10:26 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']'
00:05:11.330 21:10:26 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1
00:05:11.330 21:10:26 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs
00:05:11.330 21:10:26 -- rpc/rpc.sh@35 -- # bdevs='[]'
00:05:11.592 21:10:26 -- rpc/rpc.sh@36 -- # jq length
00:05:11.592 21:10:26 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']'
00:05:11.592
00:05:11.592 real 0m0.108s
00:05:11.592 user 0m0.064s
00:05:11.592 sys 0m0.013s
00:05:11.592 21:10:26 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:05:11.592 21:10:26 -- common/autotest_common.sh@10 -- # set +x
00:05:11.592 ************************************
00:05:11.592 END TEST rpc_plugins
00:05:11.592 ************************************
00:05:11.592 21:10:26 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test
00:05:11.592 21:10:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:05:11.592 21:10:26 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:11.592 21:10:26 -- common/autotest_common.sh@10 -- # set +x
00:05:11.592 ************************************
00:05:11.592 START TEST rpc_trace_cmd_test
00:05:11.592 ************************************
00:05:11.592 21:10:26 -- rpc/rpc.sh@40 -- # local info
00:05:11.592 21:10:26 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info
00:05:11.592 21:10:26 -- rpc/rpc.sh@42 -- # info='{
00:05:11.592   "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1007177",
00:05:11.592   "tpoint_group_mask": "0x8",
00:05:11.592   "bdev": { "mask": "0x8", "tpoint_mask": "0xffffffffffffffff" },
[... the remaining 13 groups (iscsi_conn 0x2, scsi 0x4, nvmf_rdma 0x10, nvmf_tcp 0x20, ftl 0x40, blobfs 0x80, dsa 0x200, thread 0x400, nvme_pcie 0x800, iaa 0x1000, nvme_tcp 0x2000, bdev_nvme 0x4000, sock 0x8000) are listed with "tpoint_mask": "0x0" ...]
00:05:11.593 }'
00:05:11.593 21:10:26 -- rpc/rpc.sh@43 -- # jq length
00:05:11.593 21:10:26 -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']'
00:05:11.593 21:10:26 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")'
00:05:11.593 21:10:26 -- rpc/rpc.sh@44 -- # '[' true = true ']'
00:05:11.593 21:10:26 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")'
00:05:11.852 21:10:26 -- rpc/rpc.sh@45 -- # '[' true = true ']'
00:05:11.852 21:10:26 -- rpc/rpc.sh@46 -- # jq 'has("bdev")'
00:05:11.852 21:10:26 -- rpc/rpc.sh@46 -- # '[' true = true ']'
00:05:11.852 21:10:26 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask
00:05:11.852 21:10:26 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']'
00:05:11.852
00:05:11.852 real 0m0.177s
00:05:11.852 user 0m0.146s
00:05:11.852 sys 0m0.022s
00:05:11.852 21:10:26 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:05:11.852 21:10:26 -- common/autotest_common.sh@10 -- # set +x
00:05:11.852 ************************************
00:05:11.852 END TEST rpc_trace_cmd_test
00:05:11.852 ************************************
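The trace info ties back to the '-e bdev' flag at target startup: each trace group owns one bit of the group mask, bdev sits on bit 3 in this run, so the group mask is 0x8 and every tracepoint inside that group is enabled. A tiny sketch of the mask arithmetic:

/* tpoint_mask.c - how "-e bdev" becomes the masks trace_get_info reports. */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    unsigned bdev_group_bit = 3;             /* from "bdev": {"mask": "0x8"} */
    uint64_t group_mask = 1ULL << bdev_group_bit;
    uint64_t tpoint_mask = UINT64_MAX;       /* all tpoints in the group on */

    printf("tpoint_group_mask: 0x%llx\n", (unsigned long long)group_mask);
    printf("bdev tpoint_mask:  0x%llx\n", (unsigned long long)tpoint_mask);
    return 0;
}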
00:05:11.852 21:10:26 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]]
00:05:11.852 21:10:26 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd
00:05:11.852 21:10:26 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity
00:05:11.852 21:10:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:05:11.852 21:10:26 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:11.852 21:10:26 -- common/autotest_common.sh@10 -- # set +x
00:05:11.852 ************************************
00:05:11.852 START TEST rpc_daemon_integrity
00:05:11.852 ************************************
00:05:11.852 21:10:26 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:05:11.852 21:10:26 -- rpc/rpc.sh@12 -- # bdevs='[]'
00:05:11.852 21:10:26 -- rpc/rpc.sh@13 -- # jq length
00:05:11.852 21:10:26 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:05:12.114 21:10:26 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:05:12.114 21:10:26 -- rpc/rpc.sh@15 -- # malloc=Malloc2
00:05:12.114 21:10:26 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
[... one bdev: "Malloc2" (uuid 8202a79f-7938-4c20-96ba-227cfd791f3a, block_size 512, num_blocks 16384, unclaimed), same shape as the Malloc0 dump above ...]
00:05:12.115 21:10:26 -- rpc/rpc.sh@17 -- # jq length
00:05:12.115 21:10:26 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:05:12.115 21:10:26 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0
00:05:12.115 [2024-04-24 21:10:26.873330] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2
00:05:12.115 [2024-04-24 21:10:26.873376] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:05:12.115 [2024-04-24 21:10:26.873400] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000021380
00:05:12.115 [2024-04-24 21:10:26.873410] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:05:12.115 [2024-04-24 21:10:26.875062] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:05:12.115 [2024-04-24 21:10:26.875088] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:05:12.115 Passthru0
00:05:12.115 21:10:26 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
[... two bdevs: Malloc2 now "claimed": true with "claim_type": "exclusive_write", plus "Passthru0" (uuid c39af29e-5614-5c21-b0b9-dfdb1c608c10, driver_specific.passthru: { "name": "Passthru0", "base_bdev_name": "Malloc2" }) ...]
00:05:12.115 21:10:26 -- rpc/rpc.sh@21 -- # jq length
00:05:12.115 21:10:26 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']'
00:05:12.115 21:10:26 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0
00:05:12.115 21:10:26 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2
00:05:12.115 21:10:26 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs
00:05:12.115 21:10:26 -- rpc/rpc.sh@25 -- # bdevs='[]'
00:05:12.115 21:10:26 -- rpc/rpc.sh@26 -- # jq length
00:05:12.115 21:10:26 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']'
00:05:12.115
00:05:12.115 real 0m0.218s
00:05:12.115 user 0m0.125s
00:05:12.115 sys 0m0.032s
00:05:12.115 21:10:26 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:05:12.115 21:10:26 -- common/autotest_common.sh@10 -- # set +x
00:05:12.115 ************************************
00:05:12.115 END TEST rpc_daemon_integrity
00:05:12.115 ************************************
00:05:12.115 21:10:27 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT
00:05:12.115 21:10:27 -- rpc/rpc.sh@84 -- # killprocess 1007177
00:05:12.115 21:10:27 -- common/autotest_common.sh@936 -- # '[' -z 1007177 ']'
00:05:12.115 21:10:27 -- common/autotest_common.sh@940 -- # kill -0 1007177
00:05:12.115 21:10:27 -- common/autotest_common.sh@941 -- # uname
00:05:12.115 21:10:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:05:12.115 21:10:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1007177
00:05:12.115 21:10:27 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:05:12.115 21:10:27 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:05:12.115 21:10:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1007177'
00:05:12.115 killing process with pid 1007177
00:05:12.115 21:10:27 -- common/autotest_common.sh@955 -- # kill 1007177
00:05:12.115 21:10:27 -- common/autotest_common.sh@960 -- # wait 1007177
00:05:13.070
00:05:13.070 real 0m2.984s
00:05:13.070 user 0m3.414s
00:05:13.070 sys 0m0.908s
00:05:13.070 21:10:27 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:05:13.070 21:10:27 -- common/autotest_common.sh@10 -- # set +x
00:05:13.070 ************************************
00:05:13.070 END TEST rpc
00:05:13.070 ************************************
00:05:13.070 21:10:27 -- spdk/autotest.sh@166 -- # run_test skip_rpc /var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc/skip_rpc.sh
00:05:13.070 21:10:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:05:13.070 21:10:27 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:13.070 21:10:27 -- common/autotest_common.sh@10 -- # set +x
00:05:13.070 ************************************
00:05:13.070 START TEST skip_rpc
00:05:13.070 ************************************
00:05:13.331 * Looking for test storage...
00:05:13.331 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc
00:05:13.331 21:10:28 -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc/config.json
00:05:13.331 21:10:28 -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc/log.txt
00:05:13.331 21:10:28 -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc
00:05:13.331 21:10:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:05:13.331 21:10:28 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:13.331 21:10:28 -- common/autotest_common.sh@10 -- # set +x
00:05:13.331 ************************************
00:05:13.331 START TEST skip_rpc
00:05:13.331 ************************************
00:05:13.331 21:10:28 -- common/autotest_common.sh@1111 -- # test_skip_rpc
00:05:13.331 21:10:28 -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1008002
00:05:13.331 21:10:28 -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:05:13.331 21:10:28 -- rpc/skip_rpc.sh@19 -- # sleep 5
00:05:13.331 21:10:28 -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1
00:05:13.591 [2024-04-24 21:10:28.320792] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization...
00:05:13.591 [2024-04-24 21:10:28.320932] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1008002 ]
00:05:13.591 EAL: No free 2048 kB hugepages reported on node 1
00:05:13.591 [2024-04-24 21:10:28.461041] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:13.851 [2024-04-24 21:10:28.577003] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:19.227 21:10:33 -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version
00:05:19.227 21:10:33 -- common/autotest_common.sh@638 -- # local es=0
00:05:19.227 21:10:33 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd spdk_get_version
00:05:19.227 21:10:33 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd
00:05:19.227 21:10:33 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in
00:05:19.227 21:10:33 -- common/autotest_common.sh@630 -- # type -t rpc_cmd
00:05:19.227 21:10:33 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in
00:05:19.227 21:10:33 -- common/autotest_common.sh@641 -- # rpc_cmd spdk_get_version
00:05:19.227 21:10:33 -- common/autotest_common.sh@549 -- # xtrace_disable
00:05:19.227 21:10:33 -- common/autotest_common.sh@10 -- # set +x
00:05:19.227 21:10:33 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]]
00:05:19.227 21:10:33 -- common/autotest_common.sh@641 -- # es=1
00:05:19.227 21:10:33 -- common/autotest_common.sh@649 -- # (( es > 128 ))
00:05:19.227 21:10:33 -- common/autotest_common.sh@660 -- # [[ -n '' ]]
00:05:19.227 21:10:33 -- common/autotest_common.sh@665 -- # (( !es == 0 ))
00:05:19.227 21:10:33 -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT
00:05:19.227 21:10:33 -- rpc/skip_rpc.sh@23 -- # killprocess 1008002
00:05:19.227 21:10:33 -- common/autotest_common.sh@936 -- # '[' -z 1008002 ']'
00:05:19.227 21:10:33 -- common/autotest_common.sh@940 -- # kill -0 1008002
00:05:19.227 21:10:33 -- common/autotest_common.sh@941 -- # uname
00:05:19.227 21:10:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:05:19.227 21:10:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1008002
00:05:19.227 21:10:33 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:05:19.227 21:10:33 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:05:19.227 21:10:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1008002'
00:05:19.227 killing process with pid 1008002
00:05:19.227 21:10:33 -- common/autotest_common.sh@955 -- # kill 1008002
00:05:19.227 21:10:33 -- common/autotest_common.sh@960 -- # wait 1008002
00:05:19.227
00:05:19.227 real 0m5.876s
00:05:19.227 user 0m5.516s
00:05:19.227 sys 0m0.373s
00:05:19.227 21:10:34 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:05:19.227 21:10:34 -- common/autotest_common.sh@10 -- # set +x
00:05:19.227 ************************************
00:05:19.227 END TEST skip_rpc
00:05:19.227 ************************************
00:05:19.227 21:10:34 -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json
00:05:19.227 21:10:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:05:19.227 21:10:34 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:19.227 21:10:34 -- common/autotest_common.sh@10 -- # set +x
00:05:19.486 ************************************
00:05:19.486 START TEST skip_rpc_with_json
00:05:19.486 ************************************
00:05:19.486 21:10:34 -- common/autotest_common.sh@1111 -- # test_skip_rpc_with_json 00:05:19.487 21:10:34 -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:19.487 21:10:34 -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1009219 00:05:19.487 21:10:34 -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:19.487 21:10:34 -- rpc/skip_rpc.sh@31 -- # waitforlisten 1009219 00:05:19.487 21:10:34 -- common/autotest_common.sh@817 -- # '[' -z 1009219 ']' 00:05:19.487 21:10:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:19.487 21:10:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:19.487 21:10:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:19.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:19.487 21:10:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:19.487 21:10:34 -- common/autotest_common.sh@10 -- # set +x 00:05:19.487 21:10:34 -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:19.487 [2024-04-24 21:10:34.301094] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization... 00:05:19.487 [2024-04-24 21:10:34.301205] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1009219 ] 00:05:19.487 EAL: No free 2048 kB hugepages reported on node 1 00:05:19.487 [2024-04-24 21:10:34.419422] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.746 [2024-04-24 21:10:34.516552] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.319 21:10:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:20.319 21:10:35 -- common/autotest_common.sh@850 -- # return 0 00:05:20.319 21:10:35 -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:20.319 21:10:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:20.319 21:10:35 -- common/autotest_common.sh@10 -- # set +x 00:05:20.319 [2024-04-24 21:10:35.009569] nvmf_rpc.c:2504:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:20.319 request: 00:05:20.319 { 00:05:20.319 "trtype": "tcp", 00:05:20.319 "method": "nvmf_get_transports", 00:05:20.319 "req_id": 1 00:05:20.319 } 00:05:20.319 Got JSON-RPC error response 00:05:20.319 response: 00:05:20.319 { 00:05:20.319 "code": -19, 00:05:20.319 "message": "No such device" 00:05:20.319 } 00:05:20.319 21:10:35 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:05:20.319 21:10:35 -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:20.319 21:10:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:20.319 21:10:35 -- common/autotest_common.sh@10 -- # set +x 00:05:20.319 [2024-04-24 21:10:35.017675] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:20.319 21:10:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:20.319 21:10:35 -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:20.319 21:10:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:20.319 21:10:35 -- common/autotest_common.sh@10 -- # set +x 00:05:20.319 21:10:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:20.319 21:10:35 -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc/config.json 00:05:20.319 { 
00:05:20.319 "subsystems": [ 00:05:20.319 { 00:05:20.319 "subsystem": "keyring", 00:05:20.319 "config": [] 00:05:20.319 }, 00:05:20.319 { 00:05:20.319 "subsystem": "iobuf", 00:05:20.319 "config": [ 00:05:20.319 { 00:05:20.319 "method": "iobuf_set_options", 00:05:20.319 "params": { 00:05:20.319 "small_pool_count": 8192, 00:05:20.319 "large_pool_count": 1024, 00:05:20.319 "small_bufsize": 8192, 00:05:20.319 "large_bufsize": 135168 00:05:20.319 } 00:05:20.319 } 00:05:20.319 ] 00:05:20.319 }, 00:05:20.319 { 00:05:20.319 "subsystem": "sock", 00:05:20.319 "config": [ 00:05:20.319 { 00:05:20.319 "method": "sock_impl_set_options", 00:05:20.319 "params": { 00:05:20.319 "impl_name": "posix", 00:05:20.319 "recv_buf_size": 2097152, 00:05:20.319 "send_buf_size": 2097152, 00:05:20.319 "enable_recv_pipe": true, 00:05:20.319 "enable_quickack": false, 00:05:20.319 "enable_placement_id": 0, 00:05:20.319 "enable_zerocopy_send_server": true, 00:05:20.319 "enable_zerocopy_send_client": false, 00:05:20.319 "zerocopy_threshold": 0, 00:05:20.319 "tls_version": 0, 00:05:20.319 "enable_ktls": false 00:05:20.319 } 00:05:20.319 }, 00:05:20.319 { 00:05:20.319 "method": "sock_impl_set_options", 00:05:20.319 "params": { 00:05:20.319 "impl_name": "ssl", 00:05:20.319 "recv_buf_size": 4096, 00:05:20.319 "send_buf_size": 4096, 00:05:20.319 "enable_recv_pipe": true, 00:05:20.319 "enable_quickack": false, 00:05:20.319 "enable_placement_id": 0, 00:05:20.319 "enable_zerocopy_send_server": true, 00:05:20.319 "enable_zerocopy_send_client": false, 00:05:20.319 "zerocopy_threshold": 0, 00:05:20.319 "tls_version": 0, 00:05:20.319 "enable_ktls": false 00:05:20.319 } 00:05:20.319 } 00:05:20.319 ] 00:05:20.319 }, 00:05:20.319 { 00:05:20.319 "subsystem": "vmd", 00:05:20.319 "config": [] 00:05:20.319 }, 00:05:20.319 { 00:05:20.319 "subsystem": "accel", 00:05:20.319 "config": [ 00:05:20.319 { 00:05:20.319 "method": "accel_set_options", 00:05:20.319 "params": { 00:05:20.319 "small_cache_size": 128, 00:05:20.319 "large_cache_size": 16, 00:05:20.319 "task_count": 2048, 00:05:20.319 "sequence_count": 2048, 00:05:20.319 "buf_count": 2048 00:05:20.319 } 00:05:20.319 } 00:05:20.319 ] 00:05:20.319 }, 00:05:20.319 { 00:05:20.319 "subsystem": "bdev", 00:05:20.319 "config": [ 00:05:20.319 { 00:05:20.319 "method": "bdev_set_options", 00:05:20.319 "params": { 00:05:20.319 "bdev_io_pool_size": 65535, 00:05:20.319 "bdev_io_cache_size": 256, 00:05:20.319 "bdev_auto_examine": true, 00:05:20.319 "iobuf_small_cache_size": 128, 00:05:20.319 "iobuf_large_cache_size": 16 00:05:20.319 } 00:05:20.319 }, 00:05:20.319 { 00:05:20.319 "method": "bdev_raid_set_options", 00:05:20.319 "params": { 00:05:20.319 "process_window_size_kb": 1024 00:05:20.319 } 00:05:20.319 }, 00:05:20.319 { 00:05:20.319 "method": "bdev_iscsi_set_options", 00:05:20.319 "params": { 00:05:20.319 "timeout_sec": 30 00:05:20.319 } 00:05:20.319 }, 00:05:20.319 { 00:05:20.319 "method": "bdev_nvme_set_options", 00:05:20.319 "params": { 00:05:20.319 "action_on_timeout": "none", 00:05:20.319 "timeout_us": 0, 00:05:20.319 "timeout_admin_us": 0, 00:05:20.319 "keep_alive_timeout_ms": 10000, 00:05:20.319 "arbitration_burst": 0, 00:05:20.319 "low_priority_weight": 0, 00:05:20.319 "medium_priority_weight": 0, 00:05:20.319 "high_priority_weight": 0, 00:05:20.319 "nvme_adminq_poll_period_us": 10000, 00:05:20.319 "nvme_ioq_poll_period_us": 0, 00:05:20.319 "io_queue_requests": 0, 00:05:20.319 "delay_cmd_submit": true, 00:05:20.319 "transport_retry_count": 4, 00:05:20.319 "bdev_retry_count": 3, 00:05:20.319 
"transport_ack_timeout": 0, 00:05:20.319 "ctrlr_loss_timeout_sec": 0, 00:05:20.319 "reconnect_delay_sec": 0, 00:05:20.319 "fast_io_fail_timeout_sec": 0, 00:05:20.319 "disable_auto_failback": false, 00:05:20.319 "generate_uuids": false, 00:05:20.319 "transport_tos": 0, 00:05:20.319 "nvme_error_stat": false, 00:05:20.319 "rdma_srq_size": 0, 00:05:20.319 "io_path_stat": false, 00:05:20.319 "allow_accel_sequence": false, 00:05:20.319 "rdma_max_cq_size": 0, 00:05:20.319 "rdma_cm_event_timeout_ms": 0, 00:05:20.319 "dhchap_digests": [ 00:05:20.320 "sha256", 00:05:20.320 "sha384", 00:05:20.320 "sha512" 00:05:20.320 ], 00:05:20.320 "dhchap_dhgroups": [ 00:05:20.320 "null", 00:05:20.320 "ffdhe2048", 00:05:20.320 "ffdhe3072", 00:05:20.320 "ffdhe4096", 00:05:20.320 "ffdhe6144", 00:05:20.320 "ffdhe8192" 00:05:20.320 ] 00:05:20.320 } 00:05:20.320 }, 00:05:20.320 { 00:05:20.320 "method": "bdev_nvme_set_hotplug", 00:05:20.320 "params": { 00:05:20.320 "period_us": 100000, 00:05:20.320 "enable": false 00:05:20.320 } 00:05:20.320 }, 00:05:20.320 { 00:05:20.320 "method": "bdev_wait_for_examine" 00:05:20.320 } 00:05:20.320 ] 00:05:20.320 }, 00:05:20.320 { 00:05:20.320 "subsystem": "scsi", 00:05:20.320 "config": null 00:05:20.320 }, 00:05:20.320 { 00:05:20.320 "subsystem": "scheduler", 00:05:20.320 "config": [ 00:05:20.320 { 00:05:20.320 "method": "framework_set_scheduler", 00:05:20.320 "params": { 00:05:20.320 "name": "static" 00:05:20.320 } 00:05:20.320 } 00:05:20.320 ] 00:05:20.320 }, 00:05:20.320 { 00:05:20.320 "subsystem": "vhost_scsi", 00:05:20.320 "config": [] 00:05:20.320 }, 00:05:20.320 { 00:05:20.320 "subsystem": "vhost_blk", 00:05:20.320 "config": [] 00:05:20.320 }, 00:05:20.320 { 00:05:20.320 "subsystem": "ublk", 00:05:20.320 "config": [] 00:05:20.320 }, 00:05:20.320 { 00:05:20.320 "subsystem": "nbd", 00:05:20.320 "config": [] 00:05:20.320 }, 00:05:20.320 { 00:05:20.320 "subsystem": "nvmf", 00:05:20.320 "config": [ 00:05:20.320 { 00:05:20.320 "method": "nvmf_set_config", 00:05:20.320 "params": { 00:05:20.320 "discovery_filter": "match_any", 00:05:20.320 "admin_cmd_passthru": { 00:05:20.320 "identify_ctrlr": false 00:05:20.320 } 00:05:20.320 } 00:05:20.320 }, 00:05:20.320 { 00:05:20.320 "method": "nvmf_set_max_subsystems", 00:05:20.320 "params": { 00:05:20.320 "max_subsystems": 1024 00:05:20.320 } 00:05:20.320 }, 00:05:20.320 { 00:05:20.320 "method": "nvmf_set_crdt", 00:05:20.320 "params": { 00:05:20.320 "crdt1": 0, 00:05:20.320 "crdt2": 0, 00:05:20.320 "crdt3": 0 00:05:20.320 } 00:05:20.320 }, 00:05:20.320 { 00:05:20.320 "method": "nvmf_create_transport", 00:05:20.320 "params": { 00:05:20.320 "trtype": "TCP", 00:05:20.320 "max_queue_depth": 128, 00:05:20.320 "max_io_qpairs_per_ctrlr": 127, 00:05:20.320 "in_capsule_data_size": 4096, 00:05:20.320 "max_io_size": 131072, 00:05:20.320 "io_unit_size": 131072, 00:05:20.320 "max_aq_depth": 128, 00:05:20.320 "num_shared_buffers": 511, 00:05:20.320 "buf_cache_size": 4294967295, 00:05:20.320 "dif_insert_or_strip": false, 00:05:20.320 "zcopy": false, 00:05:20.320 "c2h_success": true, 00:05:20.320 "sock_priority": 0, 00:05:20.320 "abort_timeout_sec": 1, 00:05:20.320 "ack_timeout": 0 00:05:20.320 } 00:05:20.320 } 00:05:20.320 ] 00:05:20.320 }, 00:05:20.320 { 00:05:20.320 "subsystem": "iscsi", 00:05:20.320 "config": [ 00:05:20.320 { 00:05:20.320 "method": "iscsi_set_options", 00:05:20.320 "params": { 00:05:20.320 "node_base": "iqn.2016-06.io.spdk", 00:05:20.320 "max_sessions": 128, 00:05:20.320 "max_connections_per_session": 2, 00:05:20.320 "max_queue_depth": 64, 
00:05:20.320 "default_time2wait": 2, 00:05:20.320 "default_time2retain": 20, 00:05:20.320 "first_burst_length": 8192, 00:05:20.320 "immediate_data": true, 00:05:20.320 "allow_duplicated_isid": false, 00:05:20.320 "error_recovery_level": 0, 00:05:20.320 "nop_timeout": 60, 00:05:20.320 "nop_in_interval": 30, 00:05:20.320 "disable_chap": false, 00:05:20.320 "require_chap": false, 00:05:20.320 "mutual_chap": false, 00:05:20.320 "chap_group": 0, 00:05:20.320 "max_large_datain_per_connection": 64, 00:05:20.320 "max_r2t_per_connection": 4, 00:05:20.320 "pdu_pool_size": 36864, 00:05:20.320 "immediate_data_pool_size": 16384, 00:05:20.320 "data_out_pool_size": 2048 00:05:20.320 } 00:05:20.320 } 00:05:20.320 ] 00:05:20.320 } 00:05:20.320 ] 00:05:20.320 } 00:05:20.320 21:10:35 -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:20.320 21:10:35 -- rpc/skip_rpc.sh@40 -- # killprocess 1009219 00:05:20.320 21:10:35 -- common/autotest_common.sh@936 -- # '[' -z 1009219 ']' 00:05:20.320 21:10:35 -- common/autotest_common.sh@940 -- # kill -0 1009219 00:05:20.320 21:10:35 -- common/autotest_common.sh@941 -- # uname 00:05:20.320 21:10:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:20.320 21:10:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1009219 00:05:20.320 21:10:35 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:20.320 21:10:35 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:20.320 21:10:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1009219' 00:05:20.320 killing process with pid 1009219 00:05:20.320 21:10:35 -- common/autotest_common.sh@955 -- # kill 1009219 00:05:20.320 21:10:35 -- common/autotest_common.sh@960 -- # wait 1009219 00:05:21.262 21:10:36 -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1009664 00:05:21.262 21:10:36 -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:21.262 21:10:36 -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc/config.json 00:05:26.552 21:10:41 -- rpc/skip_rpc.sh@50 -- # killprocess 1009664 00:05:26.552 21:10:41 -- common/autotest_common.sh@936 -- # '[' -z 1009664 ']' 00:05:26.552 21:10:41 -- common/autotest_common.sh@940 -- # kill -0 1009664 00:05:26.552 21:10:41 -- common/autotest_common.sh@941 -- # uname 00:05:26.552 21:10:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:26.552 21:10:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1009664 00:05:26.552 21:10:41 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:26.552 21:10:41 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:26.552 21:10:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1009664' 00:05:26.552 killing process with pid 1009664 00:05:26.552 21:10:41 -- common/autotest_common.sh@955 -- # kill 1009664 00:05:26.552 21:10:41 -- common/autotest_common.sh@960 -- # wait 1009664 00:05:27.122 21:10:41 -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc/log.txt 00:05:27.122 21:10:41 -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc/log.txt 00:05:27.122 00:05:27.122 real 0m7.754s 00:05:27.122 user 0m7.363s 00:05:27.122 sys 0m0.719s 00:05:27.122 21:10:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:27.122 21:10:41 -- common/autotest_common.sh@10 -- # set +x 00:05:27.122 
************************************ 00:05:27.122 END TEST skip_rpc_with_json 00:05:27.122 ************************************ 00:05:27.122 21:10:41 -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:27.122 21:10:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:27.123 21:10:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:27.123 21:10:41 -- common/autotest_common.sh@10 -- # set +x 00:05:27.384 ************************************ 00:05:27.384 START TEST skip_rpc_with_delay 00:05:27.384 ************************************ 00:05:27.384 21:10:42 -- common/autotest_common.sh@1111 -- # test_skip_rpc_with_delay 00:05:27.384 21:10:42 -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:27.384 21:10:42 -- common/autotest_common.sh@638 -- # local es=0 00:05:27.384 21:10:42 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:27.384 21:10:42 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt 00:05:27.384 21:10:42 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:27.384 21:10:42 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt 00:05:27.384 21:10:42 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:27.384 21:10:42 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt 00:05:27.384 21:10:42 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:27.384 21:10:42 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt 00:05:27.384 21:10:42 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:27.384 21:10:42 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:27.384 [2024-04-24 21:10:42.176821] app.c: 751:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
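What test_skip_rpc_with_delay asserts, as a minimal sketch (same build tree assumed): --wait-for-rpc is meaningless without an RPC server, so the launch must exit non-zero before the app ever runs, and the harness wraps it in NOT to check exactly that.

    # Conflicting flags must fail fast; the error above is the expected output.
    if ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
        echo "unexpected: spdk_tgt accepted conflicting flags" >&2
        exit 1
    fi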
00:05:27.384 [2024-04-24 21:10:42.176948] app.c: 630:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:27.384 21:10:42 -- common/autotest_common.sh@641 -- # es=1 00:05:27.384 21:10:42 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:27.384 21:10:42 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:27.384 21:10:42 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:27.384 00:05:27.384 real 0m0.129s 00:05:27.384 user 0m0.068s 00:05:27.384 sys 0m0.060s 00:05:27.384 21:10:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:27.384 21:10:42 -- common/autotest_common.sh@10 -- # set +x 00:05:27.384 ************************************ 00:05:27.384 END TEST skip_rpc_with_delay 00:05:27.384 ************************************ 00:05:27.384 21:10:42 -- rpc/skip_rpc.sh@77 -- # uname 00:05:27.384 21:10:42 -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:27.384 21:10:42 -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:27.384 21:10:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:27.384 21:10:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:27.384 21:10:42 -- common/autotest_common.sh@10 -- # set +x 00:05:27.384 ************************************ 00:05:27.384 START TEST exit_on_failed_rpc_init 00:05:27.384 ************************************ 00:05:27.384 21:10:42 -- common/autotest_common.sh@1111 -- # test_exit_on_failed_rpc_init 00:05:27.384 21:10:42 -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1010963 00:05:27.384 21:10:42 -- rpc/skip_rpc.sh@63 -- # waitforlisten 1010963 00:05:27.384 21:10:42 -- common/autotest_common.sh@817 -- # '[' -z 1010963 ']' 00:05:27.384 21:10:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:27.384 21:10:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:27.384 21:10:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:27.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:27.385 21:10:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:27.385 21:10:42 -- common/autotest_common.sh@10 -- # set +x 00:05:27.385 21:10:42 -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:27.646 [2024-04-24 21:10:42.424582] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization... 
00:05:27.646 [2024-04-24 21:10:42.424689] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1010963 ] 00:05:27.646 EAL: No free 2048 kB hugepages reported on node 1 00:05:27.646 [2024-04-24 21:10:42.542060] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.907 [2024-04-24 21:10:42.637997] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.168 21:10:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:28.168 21:10:43 -- common/autotest_common.sh@850 -- # return 0 00:05:28.168 21:10:43 -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:28.168 21:10:43 -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:28.168 21:10:43 -- common/autotest_common.sh@638 -- # local es=0 00:05:28.168 21:10:43 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:28.168 21:10:43 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt 00:05:28.168 21:10:43 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:28.168 21:10:43 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt 00:05:28.168 21:10:43 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:28.168 21:10:43 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt 00:05:28.168 21:10:43 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:28.168 21:10:43 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt 00:05:28.168 21:10:43 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:28.168 21:10:43 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:28.429 [2024-04-24 21:10:43.196582] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization... 00:05:28.430 [2024-04-24 21:10:43.196692] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1011098 ] 00:05:28.430 EAL: No free 2048 kB hugepages reported on node 1 00:05:28.430 [2024-04-24 21:10:43.310569] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.691 [2024-04-24 21:10:43.405325] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:28.691 [2024-04-24 21:10:43.405395] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:05:28.691 [2024-04-24 21:10:43.405412] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:28.691 [2024-04-24 21:10:43.405422] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:28.691 21:10:43 -- common/autotest_common.sh@641 -- # es=234 00:05:28.691 21:10:43 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:28.691 21:10:43 -- common/autotest_common.sh@650 -- # es=106 00:05:28.691 21:10:43 -- common/autotest_common.sh@651 -- # case "$es" in 00:05:28.691 21:10:43 -- common/autotest_common.sh@658 -- # es=1 00:05:28.691 21:10:43 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:28.691 21:10:43 -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:28.691 21:10:43 -- rpc/skip_rpc.sh@70 -- # killprocess 1010963 00:05:28.691 21:10:43 -- common/autotest_common.sh@936 -- # '[' -z 1010963 ']' 00:05:28.691 21:10:43 -- common/autotest_common.sh@940 -- # kill -0 1010963 00:05:28.691 21:10:43 -- common/autotest_common.sh@941 -- # uname 00:05:28.691 21:10:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:28.691 21:10:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1010963 00:05:28.691 21:10:43 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:28.691 21:10:43 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:28.691 21:10:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1010963' 00:05:28.691 killing process with pid 1010963 00:05:28.691 21:10:43 -- common/autotest_common.sh@955 -- # kill 1010963 00:05:28.691 21:10:43 -- common/autotest_common.sh@960 -- # wait 1010963 00:05:29.633 00:05:29.633 real 0m2.119s 00:05:29.633 user 0m2.322s 00:05:29.633 sys 0m0.541s 00:05:29.633 21:10:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:29.633 21:10:44 -- common/autotest_common.sh@10 -- # set +x 00:05:29.633 ************************************ 00:05:29.633 END TEST exit_on_failed_rpc_init 00:05:29.634 ************************************ 00:05:29.634 21:10:44 -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc/config.json 00:05:29.634 00:05:29.634 real 0m16.456s 00:05:29.634 user 0m15.478s 00:05:29.634 sys 0m2.035s 00:05:29.634 21:10:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:29.634 21:10:44 -- common/autotest_common.sh@10 -- # set +x 00:05:29.634 ************************************ 00:05:29.634 END TEST skip_rpc 00:05:29.634 ************************************ 00:05:29.634 21:10:44 -- spdk/autotest.sh@167 -- # run_test rpc_client /var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:29.634 21:10:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:29.634 21:10:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:29.634 21:10:44 -- common/autotest_common.sh@10 -- # set +x 00:05:29.895 ************************************ 00:05:29.895 START TEST rpc_client 00:05:29.895 ************************************ 00:05:29.895 21:10:44 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:29.895 * Looking for test storage... 
00:05:29.895 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_client 00:05:29.895 21:10:44 -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:29.895 OK 00:05:29.895 21:10:44 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:29.895 00:05:29.895 real 0m0.133s 00:05:29.895 user 0m0.046s 00:05:29.895 sys 0m0.091s 00:05:29.895 21:10:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:29.895 21:10:44 -- common/autotest_common.sh@10 -- # set +x 00:05:29.895 ************************************ 00:05:29.895 END TEST rpc_client 00:05:29.895 ************************************ 00:05:29.895 21:10:44 -- spdk/autotest.sh@168 -- # run_test json_config /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/json_config.sh 00:05:29.895 21:10:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:29.895 21:10:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:29.895 21:10:44 -- common/autotest_common.sh@10 -- # set +x 00:05:30.157 ************************************ 00:05:30.157 START TEST json_config 00:05:30.157 ************************************ 00:05:30.157 21:10:44 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/json_config.sh 00:05:30.157 21:10:44 -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:05:30.157 21:10:44 -- nvmf/common.sh@7 -- # uname -s 00:05:30.157 21:10:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:30.157 21:10:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:30.157 21:10:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:30.157 21:10:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:30.157 21:10:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:30.157 21:10:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:30.157 21:10:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:30.157 21:10:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:30.157 21:10:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:30.157 21:10:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:30.157 21:10:44 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b7babf-2e5c-ee11-906e-a4bf01970bf2 00:05:30.157 21:10:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=80b7babf-2e5c-ee11-906e-a4bf01970bf2 00:05:30.157 21:10:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:30.157 21:10:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:30.157 21:10:44 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:30.157 21:10:44 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:30.157 21:10:44 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:05:30.157 21:10:44 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:30.157 21:10:44 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:30.157 21:10:44 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:30.157 21:10:44 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:30.157 21:10:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:30.157 21:10:44 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:30.157 21:10:44 -- paths/export.sh@5 -- # export PATH 00:05:30.157 21:10:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:30.157 21:10:44 -- nvmf/common.sh@47 -- # : 0 00:05:30.157 21:10:44 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:30.157 21:10:44 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:30.157 21:10:44 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:30.157 21:10:44 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:30.157 21:10:44 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:30.157 21:10:44 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:30.157 21:10:44 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:30.157 21:10:44 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:30.157 21:10:44 -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/common.sh 00:05:30.157 21:10:44 -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:30.157 21:10:44 -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:30.157 21:10:44 -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:30.157 21:10:44 -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:30.157 21:10:44 -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:30.157 21:10:44 -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:30.157 21:10:44 -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:30.157 21:10:44 -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:30.157 21:10:44 -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:30.158 21:10:44 -- json_config/json_config.sh@33 
-- # declare -A app_params 00:05:30.158 21:10:44 -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/dsa-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/dsa-phy-autotest/spdk/spdk_initiator_config.json') 00:05:30.158 21:10:44 -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:30.158 21:10:44 -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:30.158 21:10:44 -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:30.158 21:10:44 -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:05:30.158 INFO: JSON configuration test init 00:05:30.158 21:10:44 -- json_config/json_config.sh@357 -- # json_config_test_init 00:05:30.158 21:10:44 -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:05:30.158 21:10:44 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:30.158 21:10:44 -- common/autotest_common.sh@10 -- # set +x 00:05:30.158 21:10:44 -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:05:30.158 21:10:44 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:30.158 21:10:44 -- common/autotest_common.sh@10 -- # set +x 00:05:30.158 21:10:44 -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:05:30.158 21:10:44 -- json_config/common.sh@9 -- # local app=target 00:05:30.158 21:10:44 -- json_config/common.sh@10 -- # shift 00:05:30.158 21:10:44 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:30.158 21:10:44 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:30.158 21:10:44 -- json_config/common.sh@15 -- # local app_extra_params= 00:05:30.158 21:10:44 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:30.158 21:10:44 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:30.158 21:10:44 -- json_config/common.sh@22 -- # app_pid["$app"]=1011525 00:05:30.158 21:10:44 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:30.158 Waiting for target to run... 00:05:30.158 21:10:44 -- json_config/common.sh@25 -- # waitforlisten 1011525 /var/tmp/spdk_tgt.sock 00:05:30.158 21:10:44 -- json_config/common.sh@21 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:30.158 21:10:44 -- common/autotest_common.sh@817 -- # '[' -z 1011525 ']' 00:05:30.158 21:10:44 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:30.158 21:10:44 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:30.158 21:10:44 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:30.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:30.158 21:10:44 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:30.158 21:10:44 -- common/autotest_common.sh@10 -- # set +x 00:05:30.158 [2024-04-24 21:10:45.028920] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization... 
00:05:30.158 [2024-04-24 21:10:45.028995] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1011525 ] 00:05:30.158 EAL: No free 2048 kB hugepages reported on node 1 00:05:30.419 [2024-04-24 21:10:45.278384] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.419 [2024-04-24 21:10:45.369078] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.989 21:10:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:30.989 21:10:45 -- common/autotest_common.sh@850 -- # return 0 00:05:30.989 21:10:45 -- json_config/common.sh@26 -- # echo '' 00:05:30.989 00:05:30.989 21:10:45 -- json_config/json_config.sh@269 -- # create_accel_config 00:05:30.989 21:10:45 -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:05:30.990 21:10:45 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:30.990 21:10:45 -- common/autotest_common.sh@10 -- # set +x 00:05:30.990 21:10:45 -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:05:30.990 21:10:45 -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:05:30.990 21:10:45 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:30.990 21:10:45 -- common/autotest_common.sh@10 -- # set +x 00:05:30.990 21:10:45 -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:30.990 21:10:45 -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:05:30.990 21:10:45 -- json_config/common.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:40.985 21:10:54 -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:05:40.985 21:10:54 -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:40.985 21:10:54 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:40.985 21:10:54 -- common/autotest_common.sh@10 -- # set +x 00:05:40.985 21:10:54 -- json_config/json_config.sh@45 -- # local ret=0 00:05:40.985 21:10:54 -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:40.985 21:10:54 -- json_config/json_config.sh@46 -- # local enabled_types 00:05:40.985 21:10:54 -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:40.985 21:10:54 -- json_config/common.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:40.985 21:10:54 -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:40.985 21:10:54 -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:40.985 21:10:54 -- json_config/json_config.sh@48 -- # local get_types 00:05:40.985 21:10:54 -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:40.985 21:10:54 -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:05:40.985 21:10:54 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:40.985 21:10:54 -- common/autotest_common.sh@10 -- # set +x 00:05:40.985 21:10:54 -- json_config/json_config.sh@55 -- # return 0 00:05:40.985 21:10:54 -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:05:40.985 21:10:54 -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:40.985 21:10:54 -- json_config/json_config.sh@286 -- # [[ 
0 -eq 1 ]] 00:05:40.985 21:10:54 -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:05:40.985 21:10:54 -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:05:40.985 21:10:54 -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:05:40.985 21:10:54 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:40.985 21:10:54 -- common/autotest_common.sh@10 -- # set +x 00:05:40.985 21:10:54 -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:40.985 21:10:54 -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:05:40.985 21:10:54 -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:05:40.985 21:10:54 -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:40.985 21:10:54 -- json_config/common.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:40.985 MallocForNvmf0 00:05:40.985 21:10:55 -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:40.985 21:10:55 -- json_config/common.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:40.985 MallocForNvmf1 00:05:40.985 21:10:55 -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:40.985 21:10:55 -- json_config/common.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:40.986 [2024-04-24 21:10:55.346990] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:40.986 21:10:55 -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:40.986 21:10:55 -- json_config/common.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:40.986 21:10:55 -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:40.986 21:10:55 -- json_config/common.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:40.986 21:10:55 -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:40.986 21:10:55 -- json_config/common.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:40.986 21:10:55 -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:40.986 21:10:55 -- json_config/common.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:40.986 [2024-04-24 21:10:55.943493] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:41.246 21:10:55 -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:05:41.246 21:10:55 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:41.246 21:10:55 -- common/autotest_common.sh@10 -- # set +x 00:05:41.246 21:10:55 
-- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:05:41.246 21:10:55 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:41.246 21:10:55 -- common/autotest_common.sh@10 -- # set +x 00:05:41.246 21:10:56 -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:05:41.246 21:10:56 -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:41.246 21:10:56 -- json_config/common.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:41.246 MallocBdevForConfigChangeCheck 00:05:41.246 21:10:56 -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:05:41.246 21:10:56 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:41.246 21:10:56 -- common/autotest_common.sh@10 -- # set +x 00:05:41.506 21:10:56 -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:05:41.506 21:10:56 -- json_config/common.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:41.767 21:10:56 -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:05:41.767 INFO: shutting down applications... 00:05:41.767 21:10:56 -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:05:41.767 21:10:56 -- json_config/json_config.sh@368 -- # json_config_clear target 00:05:41.767 21:10:56 -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:05:41.767 21:10:56 -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:49.899 Calling clear_iscsi_subsystem 00:05:49.899 Calling clear_nvmf_subsystem 00:05:49.899 Calling clear_nbd_subsystem 00:05:49.899 Calling clear_ublk_subsystem 00:05:49.899 Calling clear_vhost_blk_subsystem 00:05:49.899 Calling clear_vhost_scsi_subsystem 00:05:49.899 Calling clear_bdev_subsystem 00:05:49.899 21:11:03 -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/config_filter.py 00:05:49.899 21:11:03 -- json_config/json_config.sh@343 -- # count=100 00:05:49.899 21:11:03 -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:05:49.899 21:11:03 -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:49.899 21:11:03 -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:49.899 21:11:03 -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:49.899 21:11:03 -- json_config/json_config.sh@345 -- # break 00:05:49.899 21:11:03 -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:05:49.899 21:11:03 -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:05:49.899 21:11:03 -- json_config/common.sh@31 -- # local app=target 00:05:49.899 21:11:03 -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:49.899 21:11:03 -- json_config/common.sh@35 -- # [[ -n 1011525 ]] 00:05:49.899 21:11:03 -- json_config/common.sh@38 -- # kill -SIGINT 1011525 00:05:49.899 21:11:03 -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:49.899 21:11:03 -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:49.899 21:11:03 -- json_config/common.sh@41 -- # kill -0 1011525 
00:05:49.899 21:11:03 -- json_config/common.sh@45 -- # sleep 0.5 00:05:49.899 21:11:04 -- json_config/common.sh@40 -- # (( i++ )) 00:05:49.899 21:11:04 -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:49.899 21:11:04 -- json_config/common.sh@41 -- # kill -0 1011525 00:05:49.899 21:11:04 -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:49.899 21:11:04 -- json_config/common.sh@43 -- # break 00:05:49.899 21:11:04 -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:49.899 21:11:04 -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:49.899 SPDK target shutdown done 00:05:49.899 21:11:04 -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:05:49.899 INFO: relaunching applications... 00:05:49.899 21:11:04 -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/dsa-phy-autotest/spdk/spdk_tgt_config.json 00:05:49.899 21:11:04 -- json_config/common.sh@9 -- # local app=target 00:05:49.899 21:11:04 -- json_config/common.sh@10 -- # shift 00:05:49.899 21:11:04 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:49.899 21:11:04 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:49.899 21:11:04 -- json_config/common.sh@15 -- # local app_extra_params= 00:05:49.899 21:11:04 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:49.899 21:11:04 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:49.899 21:11:04 -- json_config/common.sh@22 -- # app_pid["$app"]=1015529 00:05:49.899 21:11:04 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:49.899 Waiting for target to run... 00:05:49.899 21:11:04 -- json_config/common.sh@25 -- # waitforlisten 1015529 /var/tmp/spdk_tgt.sock 00:05:49.899 21:11:04 -- common/autotest_common.sh@817 -- # '[' -z 1015529 ']' 00:05:49.899 21:11:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:49.899 21:11:04 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:49.899 21:11:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:49.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:49.899 21:11:04 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:49.899 21:11:04 -- common/autotest_common.sh@10 -- # set +x 00:05:49.899 21:11:04 -- json_config/common.sh@21 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/dsa-phy-autotest/spdk/spdk_tgt_config.json 00:05:49.899 [2024-04-24 21:11:04.507698] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization... 
00:05:49.899 [2024-04-24 21:11:04.507849] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1015529 ] 00:05:49.899 EAL: No free 2048 kB hugepages reported on node 1 00:05:50.159 [2024-04-24 21:11:04.877877] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.159 [2024-04-24 21:11:04.963502] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.158 [2024-04-24 21:11:13.902163] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:00.158 [2024-04-24 21:11:13.934411] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:00.158 21:11:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:00.158 21:11:13 -- common/autotest_common.sh@850 -- # return 0 00:06:00.158 21:11:13 -- json_config/common.sh@26 -- # echo '' 00:06:00.158 00:06:00.158 21:11:13 -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:06:00.158 21:11:13 -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:00.158 INFO: Checking if target configuration is the same... 00:06:00.158 21:11:13 -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/dsa-phy-autotest/spdk/spdk_tgt_config.json 00:06:00.158 21:11:13 -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:06:00.158 21:11:13 -- json_config/common.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:00.158 + '[' 2 -ne 2 ']' 00:06:00.158 +++ dirname /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:00.158 ++ readlink -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/../.. 00:06:00.158 + rootdir=/var/jenkins/workspace/dsa-phy-autotest/spdk 00:06:00.158 +++ basename /dev/fd/62 00:06:00.158 ++ mktemp /tmp/62.XXX 00:06:00.158 + tmp_file_1=/tmp/62.vKt 00:06:00.158 +++ basename /var/jenkins/workspace/dsa-phy-autotest/spdk/spdk_tgt_config.json 00:06:00.158 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:00.158 + tmp_file_2=/tmp/spdk_tgt_config.json.WEw 00:06:00.158 + ret=0 00:06:00.158 + /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:00.158 + /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:00.158 + diff -u /tmp/62.vKt /tmp/spdk_tgt_config.json.WEw 00:06:00.158 + echo 'INFO: JSON config files are the same' 00:06:00.158 INFO: JSON config files are the same 00:06:00.158 + rm /tmp/62.vKt /tmp/spdk_tgt_config.json.WEw 00:06:00.158 + exit 0 00:06:00.158 21:11:14 -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:06:00.158 21:11:14 -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:00.158 INFO: changing configuration and checking if this can be detected... 
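The comparison just performed (and reused by the change-detection pass that follows) key-sorts both documents before diffing, so JSON ordering differences never register as drift. Its essence, assuming config_filter.py reads stdin the way json_diff.sh drives it here; the temp filenames are illustrative:

    # Sort-normalize the live config and the saved snapshot, then diff them.
    ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
        | ./test/json_config/config_filter.py -method sort > /tmp/live.json
    ./test/json_config/config_filter.py -method sort \
        < spdk_tgt_config.json > /tmp/saved.json
    diff -u /tmp/saved.json /tmp/live.json && echo 'configs match'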
00:06:00.158 21:11:14 -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:00.158 21:11:14 -- json_config/common.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:00.158 21:11:14 -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/dsa-phy-autotest/spdk/spdk_tgt_config.json 00:06:00.158 21:11:14 -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:06:00.158 21:11:14 -- json_config/common.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:00.158 + '[' 2 -ne 2 ']' 00:06:00.158 +++ dirname /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:00.158 ++ readlink -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/../.. 00:06:00.158 + rootdir=/var/jenkins/workspace/dsa-phy-autotest/spdk 00:06:00.158 +++ basename /dev/fd/62 00:06:00.158 ++ mktemp /tmp/62.XXX 00:06:00.158 + tmp_file_1=/tmp/62.AnK 00:06:00.158 +++ basename /var/jenkins/workspace/dsa-phy-autotest/spdk/spdk_tgt_config.json 00:06:00.158 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:00.158 + tmp_file_2=/tmp/spdk_tgt_config.json.eji 00:06:00.158 + ret=0 00:06:00.158 + /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:00.158 + /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:00.158 + diff -u /tmp/62.AnK /tmp/spdk_tgt_config.json.eji 00:06:00.158 + ret=1 00:06:00.158 + echo '=== Start of file: /tmp/62.AnK ===' 00:06:00.158 + cat /tmp/62.AnK 00:06:00.158 + echo '=== End of file: /tmp/62.AnK ===' 00:06:00.158 + echo '' 00:06:00.158 + echo '=== Start of file: /tmp/spdk_tgt_config.json.eji ===' 00:06:00.158 + cat /tmp/spdk_tgt_config.json.eji 00:06:00.158 + echo '=== End of file: /tmp/spdk_tgt_config.json.eji ===' 00:06:00.158 + echo '' 00:06:00.158 + rm /tmp/62.AnK /tmp/spdk_tgt_config.json.eji 00:06:00.158 + exit 1 00:06:00.158 21:11:14 -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:06:00.158 INFO: configuration change detected. 
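The change that made this second diff exit 1 is the single RPC at the top of this step: deleting the marker bdev is enough to push the live view away from the snapshot, and for this pass the non-zero diff status is the passing outcome. As a sketch against the same target socket:

    # One mutation; after it, saved-vs-live must differ, so diff exiting 1
    # is what the harness expects here.
    ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete \
        MallocBdevForConfigChangeCheck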
00:06:00.158 21:11:14 -- json_config/json_config.sh@394 -- # json_config_test_fini 00:06:00.158 21:11:14 -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:06:00.158 21:11:14 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:00.158 21:11:14 -- common/autotest_common.sh@10 -- # set +x 00:06:00.158 21:11:14 -- json_config/json_config.sh@307 -- # local ret=0 00:06:00.158 21:11:14 -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:06:00.158 21:11:14 -- json_config/json_config.sh@317 -- # [[ -n 1015529 ]] 00:06:00.158 21:11:14 -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:06:00.158 21:11:14 -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:06:00.158 21:11:14 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:00.158 21:11:14 -- common/autotest_common.sh@10 -- # set +x 00:06:00.158 21:11:14 -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:06:00.158 21:11:14 -- json_config/json_config.sh@193 -- # uname -s 00:06:00.158 21:11:14 -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:06:00.158 21:11:14 -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:06:00.158 21:11:14 -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:06:00.158 21:11:14 -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:06:00.158 21:11:14 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:00.158 21:11:14 -- common/autotest_common.sh@10 -- # set +x 00:06:00.158 21:11:14 -- json_config/json_config.sh@323 -- # killprocess 1015529 00:06:00.158 21:11:14 -- common/autotest_common.sh@936 -- # '[' -z 1015529 ']' 00:06:00.158 21:11:14 -- common/autotest_common.sh@940 -- # kill -0 1015529 00:06:00.158 21:11:14 -- common/autotest_common.sh@941 -- # uname 00:06:00.158 21:11:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:00.158 21:11:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1015529 00:06:00.158 21:11:14 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:00.158 21:11:14 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:00.158 21:11:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1015529' 00:06:00.158 killing process with pid 1015529 00:06:00.158 21:11:14 -- common/autotest_common.sh@955 -- # kill 1015529 00:06:00.159 21:11:14 -- common/autotest_common.sh@960 -- # wait 1015529 00:06:03.458 21:11:18 -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/dsa-phy-autotest/spdk/spdk_tgt_config.json 00:06:03.458 21:11:18 -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:06:03.458 21:11:18 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:03.458 21:11:18 -- common/autotest_common.sh@10 -- # set +x 00:06:03.458 21:11:18 -- json_config/json_config.sh@328 -- # return 0 00:06:03.458 21:11:18 -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:06:03.458 INFO: Success 00:06:03.458 00:06:03.458 real 0m33.424s 00:06:03.458 user 0m31.047s 00:06:03.458 sys 0m2.119s 00:06:03.458 21:11:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:03.458 21:11:18 -- common/autotest_common.sh@10 -- # set +x 00:06:03.458 ************************************ 00:06:03.458 END TEST json_config 00:06:03.458 ************************************ 00:06:03.458 21:11:18 -- spdk/autotest.sh@169 -- # run_test json_config_extra_key 
/var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:03.458 21:11:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:03.458 21:11:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:03.458 21:11:18 -- common/autotest_common.sh@10 -- # set +x 00:06:03.720 ************************************ 00:06:03.720 START TEST json_config_extra_key 00:06:03.720 ************************************ 00:06:03.720 21:11:18 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:03.720 21:11:18 -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:06:03.720 21:11:18 -- nvmf/common.sh@7 -- # uname -s 00:06:03.720 21:11:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:03.720 21:11:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:03.720 21:11:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:03.720 21:11:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:03.720 21:11:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:03.720 21:11:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:03.720 21:11:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:03.720 21:11:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:03.720 21:11:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:03.720 21:11:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:03.720 21:11:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b7babf-2e5c-ee11-906e-a4bf01970bf2 00:06:03.720 21:11:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=80b7babf-2e5c-ee11-906e-a4bf01970bf2 00:06:03.720 21:11:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:03.720 21:11:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:03.720 21:11:18 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:03.720 21:11:18 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:03.720 21:11:18 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:06:03.720 21:11:18 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:03.720 21:11:18 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:03.720 21:11:18 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:03.720 21:11:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:03.720 21:11:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:03.720 21:11:18 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:03.720 21:11:18 -- paths/export.sh@5 -- # export PATH 00:06:03.720 21:11:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:03.720 21:11:18 -- nvmf/common.sh@47 -- # : 0 00:06:03.720 21:11:18 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:03.720 21:11:18 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:03.720 21:11:18 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:03.720 21:11:18 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:03.720 21:11:18 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:03.720 21:11:18 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:03.720 21:11:18 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:03.720 21:11:18 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:03.720 21:11:18 -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/common.sh 00:06:03.720 21:11:18 -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:03.720 21:11:18 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:03.720 21:11:18 -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:03.720 21:11:18 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:03.720 21:11:18 -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:03.720 21:11:18 -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:03.720 21:11:18 -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/extra_key.json') 00:06:03.720 21:11:18 -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:03.720 21:11:18 -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:03.720 21:11:18 -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:03.720 INFO: launching applications... 
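common.sh drives each app through the associative arrays declared above (app_pid, app_socket, app_params, configs_path); the launch that follows is spdk_tgt started with those parameters plus --json, then a waitforlisten poll on the RPC socket. Condensed into a hedged sketch (the real waitforlisten does more retry bookkeeping than this):

    tgt=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt
    sock=/var/tmp/spdk_tgt.sock
    cfg=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/extra_key.json
    $tgt -m 0x1 -s 1024 -r "$sock" --json "$cfg" &
    pid=$!
    # ready once any RPC answers on the socket; bail out if the target dies first
    until /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s "$sock" -t 1 rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$pid" 2>/dev/null || { echo 'target exited during startup'; exit 1; }
        sleep 0.5
    done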
00:06:03.720 21:11:18 -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/extra_key.json 00:06:03.720 21:11:18 -- json_config/common.sh@9 -- # local app=target 00:06:03.720 21:11:18 -- json_config/common.sh@10 -- # shift 00:06:03.720 21:11:18 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:03.720 21:11:18 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:03.720 21:11:18 -- json_config/common.sh@15 -- # local app_extra_params= 00:06:03.720 21:11:18 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:03.720 21:11:18 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:03.720 21:11:18 -- json_config/common.sh@22 -- # app_pid["$app"]=1018354 00:06:03.720 21:11:18 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:03.720 Waiting for target to run... 00:06:03.720 21:11:18 -- json_config/common.sh@25 -- # waitforlisten 1018354 /var/tmp/spdk_tgt.sock 00:06:03.720 21:11:18 -- common/autotest_common.sh@817 -- # '[' -z 1018354 ']' 00:06:03.720 21:11:18 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:03.720 21:11:18 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:03.720 21:11:18 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:03.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:03.720 21:11:18 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:03.720 21:11:18 -- common/autotest_common.sh@10 -- # set +x 00:06:03.720 21:11:18 -- json_config/common.sh@21 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/extra_key.json 00:06:03.720 [2024-04-24 21:11:18.636672] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization... 00:06:03.720 [2024-04-24 21:11:18.636812] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1018354 ] 00:06:03.981 EAL: No free 2048 kB hugepages reported on node 1 00:06:04.242 [2024-04-24 21:11:19.183071] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.503 [2024-04-24 21:11:19.272599] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.075 21:11:19 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:05.075 21:11:19 -- common/autotest_common.sh@850 -- # return 0 00:06:05.075 21:11:19 -- json_config/common.sh@26 -- # echo '' 00:06:05.075 00:06:05.075 21:11:19 -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:05.075 INFO: shutting down applications... 
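Shutdown is the mirror image, as the common.sh trace below shows: SIGINT the target, then poll it down with kill -0 for at most thirty half-second intervals. Condensed, assuming $pid holds the target's pid:

    kill -SIGINT "$pid"
    for i in $(seq 1 30); do
        kill -0 "$pid" 2>/dev/null || break   # process gone: clean shutdown
        sleep 0.5
    done
    kill -0 "$pid" 2>/dev/null && echo 'target refused to stop' || echo 'SPDK target shutdown done'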
00:06:05.075 21:11:19 -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:05.075 21:11:19 -- json_config/common.sh@31 -- # local app=target 00:06:05.075 21:11:19 -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:05.075 21:11:19 -- json_config/common.sh@35 -- # [[ -n 1018354 ]] 00:06:05.075 21:11:19 -- json_config/common.sh@38 -- # kill -SIGINT 1018354 00:06:05.075 21:11:19 -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:05.075 21:11:19 -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:05.075 21:11:19 -- json_config/common.sh@41 -- # kill -0 1018354 00:06:05.075 21:11:19 -- json_config/common.sh@45 -- # sleep 0.5 00:06:05.644 21:11:20 -- json_config/common.sh@40 -- # (( i++ )) 00:06:05.644 21:11:20 -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:05.644 21:11:20 -- json_config/common.sh@41 -- # kill -0 1018354 00:06:05.644 21:11:20 -- json_config/common.sh@45 -- # sleep 0.5 00:06:06.215 21:11:20 -- json_config/common.sh@40 -- # (( i++ )) 00:06:06.216 21:11:20 -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:06.216 21:11:20 -- json_config/common.sh@41 -- # kill -0 1018354 00:06:06.216 21:11:20 -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:06.216 21:11:20 -- json_config/common.sh@43 -- # break 00:06:06.216 21:11:20 -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:06.216 21:11:20 -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:06.216 SPDK target shutdown done 00:06:06.216 21:11:20 -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:06.216 Success 00:06:06.216 00:06:06.216 real 0m2.513s 00:06:06.216 user 0m2.127s 00:06:06.216 sys 0m0.725s 00:06:06.216 21:11:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:06.216 21:11:20 -- common/autotest_common.sh@10 -- # set +x 00:06:06.216 ************************************ 00:06:06.216 END TEST json_config_extra_key 00:06:06.216 ************************************ 00:06:06.216 21:11:20 -- spdk/autotest.sh@170 -- # run_test alias_rpc /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:06.216 21:11:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:06.216 21:11:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:06.216 21:11:20 -- common/autotest_common.sh@10 -- # set +x 00:06:06.216 ************************************ 00:06:06.216 START TEST alias_rpc 00:06:06.216 ************************************ 00:06:06.216 21:11:21 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:06.216 * Looking for test storage... 00:06:06.216 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/alias_rpc 00:06:06.216 21:11:21 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:06.216 21:11:21 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1019000 00:06:06.216 21:11:21 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1019000 00:06:06.216 21:11:21 -- common/autotest_common.sh@817 -- # '[' -z 1019000 ']' 00:06:06.216 21:11:21 -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt 00:06:06.216 21:11:21 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:06.216 21:11:21 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:06.216 21:11:21 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:06.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:06.216 21:11:21 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:06.216 21:11:21 -- common/autotest_common.sh@10 -- # set +x 00:06:06.475 [2024-04-24 21:11:21.252715] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization... 00:06:06.475 [2024-04-24 21:11:21.252858] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1019000 ] 00:06:06.475 EAL: No free 2048 kB hugepages reported on node 1 00:06:06.475 [2024-04-24 21:11:21.386120] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.736 [2024-04-24 21:11:21.483079] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.308 21:11:22 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:07.308 21:11:22 -- common/autotest_common.sh@850 -- # return 0 00:06:07.308 21:11:22 -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:07.308 21:11:22 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1019000 00:06:07.308 21:11:22 -- common/autotest_common.sh@936 -- # '[' -z 1019000 ']' 00:06:07.308 21:11:22 -- common/autotest_common.sh@940 -- # kill -0 1019000 00:06:07.308 21:11:22 -- common/autotest_common.sh@941 -- # uname 00:06:07.308 21:11:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:07.308 21:11:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1019000 00:06:07.308 21:11:22 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:07.308 21:11:22 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:07.308 21:11:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1019000' 00:06:07.308 killing process with pid 1019000 00:06:07.308 21:11:22 -- common/autotest_common.sh@955 -- # kill 1019000 00:06:07.308 21:11:22 -- common/autotest_common.sh@960 -- # wait 1019000 00:06:08.250 00:06:08.250 real 0m1.997s 00:06:08.250 user 0m2.052s 00:06:08.250 sys 0m0.485s 00:06:08.250 21:11:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:08.250 21:11:23 -- common/autotest_common.sh@10 -- # set +x 00:06:08.250 ************************************ 00:06:08.250 END TEST alias_rpc 00:06:08.250 ************************************ 00:06:08.250 21:11:23 -- spdk/autotest.sh@172 -- # [[ 0 -eq 0 ]] 00:06:08.250 21:11:23 -- spdk/autotest.sh@173 -- # run_test spdkcli_tcp /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:08.250 21:11:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:08.250 21:11:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:08.250 21:11:23 -- common/autotest_common.sh@10 -- # set +x 00:06:08.250 ************************************ 00:06:08.250 START TEST spdkcli_tcp 00:06:08.250 ************************************ 00:06:08.250 21:11:23 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:08.512 * Looking for test storage... 
00:06:08.512 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli 00:06:08.512 21:11:23 -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/common.sh 00:06:08.512 21:11:23 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:08.512 21:11:23 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/clear_config.py 00:06:08.512 21:11:23 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:08.512 21:11:23 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:08.512 21:11:23 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:08.512 21:11:23 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:08.512 21:11:23 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:08.512 21:11:23 -- common/autotest_common.sh@10 -- # set +x 00:06:08.512 21:11:23 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1019444 00:06:08.512 21:11:23 -- spdkcli/tcp.sh@27 -- # waitforlisten 1019444 00:06:08.512 21:11:23 -- common/autotest_common.sh@817 -- # '[' -z 1019444 ']' 00:06:08.512 21:11:23 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:08.512 21:11:23 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:08.512 21:11:23 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:08.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:08.512 21:11:23 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:08.512 21:11:23 -- common/autotest_common.sh@10 -- # set +x 00:06:08.512 21:11:23 -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:08.512 [2024-04-24 21:11:23.368042] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization... 
00:06:08.512 [2024-04-24 21:11:23.368185] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1019444 ] 00:06:08.512 EAL: No free 2048 kB hugepages reported on node 1 00:06:08.773 [2024-04-24 21:11:23.508923] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:08.773 [2024-04-24 21:11:23.607698] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:08.773 [2024-04-24 21:11:23.607699] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.344 21:11:24 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:09.344 21:11:24 -- common/autotest_common.sh@850 -- # return 0 00:06:09.344 21:11:24 -- spdkcli/tcp.sh@31 -- # socat_pid=1019668 00:06:09.344 21:11:24 -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:09.344 21:11:24 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:09.344 [ 00:06:09.344 "bdev_malloc_delete", 00:06:09.344 "bdev_malloc_create", 00:06:09.344 "bdev_null_resize", 00:06:09.344 "bdev_null_delete", 00:06:09.344 "bdev_null_create", 00:06:09.344 "bdev_nvme_cuse_unregister", 00:06:09.344 "bdev_nvme_cuse_register", 00:06:09.344 "bdev_opal_new_user", 00:06:09.344 "bdev_opal_set_lock_state", 00:06:09.344 "bdev_opal_delete", 00:06:09.344 "bdev_opal_get_info", 00:06:09.344 "bdev_opal_create", 00:06:09.344 "bdev_nvme_opal_revert", 00:06:09.344 "bdev_nvme_opal_init", 00:06:09.344 "bdev_nvme_send_cmd", 00:06:09.344 "bdev_nvme_get_path_iostat", 00:06:09.344 "bdev_nvme_get_mdns_discovery_info", 00:06:09.344 "bdev_nvme_stop_mdns_discovery", 00:06:09.344 "bdev_nvme_start_mdns_discovery", 00:06:09.344 "bdev_nvme_set_multipath_policy", 00:06:09.344 "bdev_nvme_set_preferred_path", 00:06:09.344 "bdev_nvme_get_io_paths", 00:06:09.344 "bdev_nvme_remove_error_injection", 00:06:09.344 "bdev_nvme_add_error_injection", 00:06:09.344 "bdev_nvme_get_discovery_info", 00:06:09.344 "bdev_nvme_stop_discovery", 00:06:09.344 "bdev_nvme_start_discovery", 00:06:09.344 "bdev_nvme_get_controller_health_info", 00:06:09.344 "bdev_nvme_disable_controller", 00:06:09.344 "bdev_nvme_enable_controller", 00:06:09.344 "bdev_nvme_reset_controller", 00:06:09.344 "bdev_nvme_get_transport_statistics", 00:06:09.344 "bdev_nvme_apply_firmware", 00:06:09.344 "bdev_nvme_detach_controller", 00:06:09.344 "bdev_nvme_get_controllers", 00:06:09.344 "bdev_nvme_attach_controller", 00:06:09.344 "bdev_nvme_set_hotplug", 00:06:09.344 "bdev_nvme_set_options", 00:06:09.344 "bdev_passthru_delete", 00:06:09.344 "bdev_passthru_create", 00:06:09.344 "bdev_lvol_grow_lvstore", 00:06:09.344 "bdev_lvol_get_lvols", 00:06:09.344 "bdev_lvol_get_lvstores", 00:06:09.344 "bdev_lvol_delete", 00:06:09.344 "bdev_lvol_set_read_only", 00:06:09.344 "bdev_lvol_resize", 00:06:09.344 "bdev_lvol_decouple_parent", 00:06:09.344 "bdev_lvol_inflate", 00:06:09.344 "bdev_lvol_rename", 00:06:09.344 "bdev_lvol_clone_bdev", 00:06:09.344 "bdev_lvol_clone", 00:06:09.344 "bdev_lvol_snapshot", 00:06:09.344 "bdev_lvol_create", 00:06:09.344 "bdev_lvol_delete_lvstore", 00:06:09.344 "bdev_lvol_rename_lvstore", 00:06:09.344 "bdev_lvol_create_lvstore", 00:06:09.344 "bdev_raid_set_options", 00:06:09.344 "bdev_raid_remove_base_bdev", 00:06:09.344 "bdev_raid_add_base_bdev", 00:06:09.344 "bdev_raid_delete", 00:06:09.344 "bdev_raid_create", 
00:06:09.344 "bdev_raid_get_bdevs", 00:06:09.344 "bdev_error_inject_error", 00:06:09.344 "bdev_error_delete", 00:06:09.344 "bdev_error_create", 00:06:09.344 "bdev_split_delete", 00:06:09.344 "bdev_split_create", 00:06:09.344 "bdev_delay_delete", 00:06:09.344 "bdev_delay_create", 00:06:09.344 "bdev_delay_update_latency", 00:06:09.344 "bdev_zone_block_delete", 00:06:09.345 "bdev_zone_block_create", 00:06:09.345 "blobfs_create", 00:06:09.345 "blobfs_detect", 00:06:09.345 "blobfs_set_cache_size", 00:06:09.345 "bdev_aio_delete", 00:06:09.345 "bdev_aio_rescan", 00:06:09.345 "bdev_aio_create", 00:06:09.345 "bdev_ftl_set_property", 00:06:09.345 "bdev_ftl_get_properties", 00:06:09.345 "bdev_ftl_get_stats", 00:06:09.345 "bdev_ftl_unmap", 00:06:09.345 "bdev_ftl_unload", 00:06:09.345 "bdev_ftl_delete", 00:06:09.345 "bdev_ftl_load", 00:06:09.345 "bdev_ftl_create", 00:06:09.345 "bdev_virtio_attach_controller", 00:06:09.345 "bdev_virtio_scsi_get_devices", 00:06:09.345 "bdev_virtio_detach_controller", 00:06:09.345 "bdev_virtio_blk_set_hotplug", 00:06:09.345 "bdev_iscsi_delete", 00:06:09.345 "bdev_iscsi_create", 00:06:09.345 "bdev_iscsi_set_options", 00:06:09.345 "accel_error_inject_error", 00:06:09.345 "ioat_scan_accel_module", 00:06:09.345 "dsa_scan_accel_module", 00:06:09.345 "iaa_scan_accel_module", 00:06:09.345 "keyring_file_remove_key", 00:06:09.345 "keyring_file_add_key", 00:06:09.345 "iscsi_set_options", 00:06:09.345 "iscsi_get_auth_groups", 00:06:09.345 "iscsi_auth_group_remove_secret", 00:06:09.345 "iscsi_auth_group_add_secret", 00:06:09.345 "iscsi_delete_auth_group", 00:06:09.345 "iscsi_create_auth_group", 00:06:09.345 "iscsi_set_discovery_auth", 00:06:09.345 "iscsi_get_options", 00:06:09.345 "iscsi_target_node_request_logout", 00:06:09.345 "iscsi_target_node_set_redirect", 00:06:09.345 "iscsi_target_node_set_auth", 00:06:09.345 "iscsi_target_node_add_lun", 00:06:09.345 "iscsi_get_stats", 00:06:09.345 "iscsi_get_connections", 00:06:09.345 "iscsi_portal_group_set_auth", 00:06:09.345 "iscsi_start_portal_group", 00:06:09.345 "iscsi_delete_portal_group", 00:06:09.345 "iscsi_create_portal_group", 00:06:09.345 "iscsi_get_portal_groups", 00:06:09.345 "iscsi_delete_target_node", 00:06:09.345 "iscsi_target_node_remove_pg_ig_maps", 00:06:09.345 "iscsi_target_node_add_pg_ig_maps", 00:06:09.345 "iscsi_create_target_node", 00:06:09.345 "iscsi_get_target_nodes", 00:06:09.345 "iscsi_delete_initiator_group", 00:06:09.345 "iscsi_initiator_group_remove_initiators", 00:06:09.345 "iscsi_initiator_group_add_initiators", 00:06:09.345 "iscsi_create_initiator_group", 00:06:09.345 "iscsi_get_initiator_groups", 00:06:09.345 "nvmf_set_crdt", 00:06:09.345 "nvmf_set_config", 00:06:09.345 "nvmf_set_max_subsystems", 00:06:09.345 "nvmf_subsystem_get_listeners", 00:06:09.345 "nvmf_subsystem_get_qpairs", 00:06:09.345 "nvmf_subsystem_get_controllers", 00:06:09.345 "nvmf_get_stats", 00:06:09.345 "nvmf_get_transports", 00:06:09.345 "nvmf_create_transport", 00:06:09.345 "nvmf_get_targets", 00:06:09.345 "nvmf_delete_target", 00:06:09.345 "nvmf_create_target", 00:06:09.345 "nvmf_subsystem_allow_any_host", 00:06:09.345 "nvmf_subsystem_remove_host", 00:06:09.345 "nvmf_subsystem_add_host", 00:06:09.345 "nvmf_ns_remove_host", 00:06:09.345 "nvmf_ns_add_host", 00:06:09.345 "nvmf_subsystem_remove_ns", 00:06:09.345 "nvmf_subsystem_add_ns", 00:06:09.345 "nvmf_subsystem_listener_set_ana_state", 00:06:09.345 "nvmf_discovery_get_referrals", 00:06:09.345 "nvmf_discovery_remove_referral", 00:06:09.345 "nvmf_discovery_add_referral", 00:06:09.345 
"nvmf_subsystem_remove_listener", 00:06:09.345 "nvmf_subsystem_add_listener", 00:06:09.345 "nvmf_delete_subsystem", 00:06:09.345 "nvmf_create_subsystem", 00:06:09.345 "nvmf_get_subsystems", 00:06:09.345 "env_dpdk_get_mem_stats", 00:06:09.345 "nbd_get_disks", 00:06:09.345 "nbd_stop_disk", 00:06:09.345 "nbd_start_disk", 00:06:09.345 "ublk_recover_disk", 00:06:09.345 "ublk_get_disks", 00:06:09.345 "ublk_stop_disk", 00:06:09.345 "ublk_start_disk", 00:06:09.345 "ublk_destroy_target", 00:06:09.345 "ublk_create_target", 00:06:09.345 "virtio_blk_create_transport", 00:06:09.345 "virtio_blk_get_transports", 00:06:09.345 "vhost_controller_set_coalescing", 00:06:09.345 "vhost_get_controllers", 00:06:09.345 "vhost_delete_controller", 00:06:09.345 "vhost_create_blk_controller", 00:06:09.345 "vhost_scsi_controller_remove_target", 00:06:09.345 "vhost_scsi_controller_add_target", 00:06:09.345 "vhost_start_scsi_controller", 00:06:09.345 "vhost_create_scsi_controller", 00:06:09.345 "thread_set_cpumask", 00:06:09.345 "framework_get_scheduler", 00:06:09.345 "framework_set_scheduler", 00:06:09.345 "framework_get_reactors", 00:06:09.345 "thread_get_io_channels", 00:06:09.345 "thread_get_pollers", 00:06:09.345 "thread_get_stats", 00:06:09.345 "framework_monitor_context_switch", 00:06:09.345 "spdk_kill_instance", 00:06:09.345 "log_enable_timestamps", 00:06:09.345 "log_get_flags", 00:06:09.345 "log_clear_flag", 00:06:09.345 "log_set_flag", 00:06:09.345 "log_get_level", 00:06:09.345 "log_set_level", 00:06:09.345 "log_get_print_level", 00:06:09.345 "log_set_print_level", 00:06:09.345 "framework_enable_cpumask_locks", 00:06:09.345 "framework_disable_cpumask_locks", 00:06:09.345 "framework_wait_init", 00:06:09.345 "framework_start_init", 00:06:09.345 "scsi_get_devices", 00:06:09.345 "bdev_get_histogram", 00:06:09.345 "bdev_enable_histogram", 00:06:09.345 "bdev_set_qos_limit", 00:06:09.345 "bdev_set_qd_sampling_period", 00:06:09.345 "bdev_get_bdevs", 00:06:09.345 "bdev_reset_iostat", 00:06:09.345 "bdev_get_iostat", 00:06:09.345 "bdev_examine", 00:06:09.345 "bdev_wait_for_examine", 00:06:09.345 "bdev_set_options", 00:06:09.345 "notify_get_notifications", 00:06:09.345 "notify_get_types", 00:06:09.345 "accel_get_stats", 00:06:09.345 "accel_set_options", 00:06:09.345 "accel_set_driver", 00:06:09.345 "accel_crypto_key_destroy", 00:06:09.345 "accel_crypto_keys_get", 00:06:09.345 "accel_crypto_key_create", 00:06:09.345 "accel_assign_opc", 00:06:09.345 "accel_get_module_info", 00:06:09.345 "accel_get_opc_assignments", 00:06:09.345 "vmd_rescan", 00:06:09.345 "vmd_remove_device", 00:06:09.345 "vmd_enable", 00:06:09.345 "sock_set_default_impl", 00:06:09.345 "sock_impl_set_options", 00:06:09.345 "sock_impl_get_options", 00:06:09.345 "iobuf_get_stats", 00:06:09.345 "iobuf_set_options", 00:06:09.345 "framework_get_pci_devices", 00:06:09.345 "framework_get_config", 00:06:09.345 "framework_get_subsystems", 00:06:09.345 "trace_get_info", 00:06:09.345 "trace_get_tpoint_group_mask", 00:06:09.345 "trace_disable_tpoint_group", 00:06:09.345 "trace_enable_tpoint_group", 00:06:09.345 "trace_clear_tpoint_mask", 00:06:09.345 "trace_set_tpoint_mask", 00:06:09.345 "keyring_get_keys", 00:06:09.345 "spdk_get_version", 00:06:09.345 "rpc_get_methods" 00:06:09.345 ] 00:06:09.345 21:11:24 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:09.345 21:11:24 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:09.345 21:11:24 -- common/autotest_common.sh@10 -- # set +x 00:06:09.345 21:11:24 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM 
EXIT 00:06:09.345 21:11:24 -- spdkcli/tcp.sh@38 -- # killprocess 1019444 00:06:09.345 21:11:24 -- common/autotest_common.sh@936 -- # '[' -z 1019444 ']' 00:06:09.345 21:11:24 -- common/autotest_common.sh@940 -- # kill -0 1019444 00:06:09.345 21:11:24 -- common/autotest_common.sh@941 -- # uname 00:06:09.345 21:11:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:09.345 21:11:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1019444 00:06:09.345 21:11:24 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:09.345 21:11:24 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:09.345 21:11:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1019444' 00:06:09.345 killing process with pid 1019444 00:06:09.345 21:11:24 -- common/autotest_common.sh@955 -- # kill 1019444 00:06:09.345 21:11:24 -- common/autotest_common.sh@960 -- # wait 1019444 00:06:10.314 00:06:10.314 real 0m1.959s 00:06:10.314 user 0m3.274s 00:06:10.314 sys 0m0.538s 00:06:10.314 21:11:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:10.314 21:11:25 -- common/autotest_common.sh@10 -- # set +x 00:06:10.314 ************************************ 00:06:10.314 END TEST spdkcli_tcp 00:06:10.314 ************************************ 00:06:10.314 21:11:25 -- spdk/autotest.sh@176 -- # run_test dpdk_mem_utility /var/jenkins/workspace/dsa-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:10.314 21:11:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:10.314 21:11:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:10.314 21:11:25 -- common/autotest_common.sh@10 -- # set +x 00:06:10.612 ************************************ 00:06:10.612 START TEST dpdk_mem_utility 00:06:10.612 ************************************ 00:06:10.612 21:11:25 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:10.612 * Looking for test storage... 00:06:10.612 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/dpdk_memory_utility 00:06:10.612 21:11:25 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:10.612 21:11:25 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1020033 00:06:10.612 21:11:25 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1020033 00:06:10.612 21:11:25 -- common/autotest_common.sh@817 -- # '[' -z 1020033 ']' 00:06:10.612 21:11:25 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt 00:06:10.612 21:11:25 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:10.612 21:11:25 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:10.612 21:11:25 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:10.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:10.612 21:11:25 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:10.612 21:11:25 -- common/autotest_common.sh@10 -- # set +x 00:06:10.612 [2024-04-24 21:11:25.460278] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization... 
00:06:10.612 [2024-04-24 21:11:25.460416] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1020033 ] 00:06:10.612 EAL: No free 2048 kB hugepages reported on node 1 00:06:10.893 [2024-04-24 21:11:25.594628] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.893 [2024-04-24 21:11:25.689852] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.465 21:11:26 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:11.465 21:11:26 -- common/autotest_common.sh@850 -- # return 0 00:06:11.465 21:11:26 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:11.465 21:11:26 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:11.465 21:11:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:11.465 21:11:26 -- common/autotest_common.sh@10 -- # set +x 00:06:11.465 { 00:06:11.465 "filename": "/tmp/spdk_mem_dump.txt" 00:06:11.465 } 00:06:11.465 21:11:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:11.465 21:11:26 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:11.465 DPDK memory size 820.000000 MiB in 1 heap(s) 00:06:11.465 1 heaps totaling size 820.000000 MiB 00:06:11.465 size: 820.000000 MiB heap id: 0 00:06:11.465 end heaps---------- 00:06:11.465 8 mempools totaling size 598.116089 MiB 00:06:11.465 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:11.465 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:11.465 size: 84.521057 MiB name: bdev_io_1020033 00:06:11.465 size: 51.011292 MiB name: evtpool_1020033 00:06:11.465 size: 50.003479 MiB name: msgpool_1020033 00:06:11.465 size: 21.763794 MiB name: PDU_Pool 00:06:11.465 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:11.465 size: 0.026123 MiB name: Session_Pool 00:06:11.465 end mempools------- 00:06:11.465 6 memzones totaling size 4.142822 MiB 00:06:11.465 size: 1.000366 MiB name: RG_ring_0_1020033 00:06:11.466 size: 1.000366 MiB name: RG_ring_1_1020033 00:06:11.466 size: 1.000366 MiB name: RG_ring_4_1020033 00:06:11.466 size: 1.000366 MiB name: RG_ring_5_1020033 00:06:11.466 size: 0.125366 MiB name: RG_ring_2_1020033 00:06:11.466 size: 0.015991 MiB name: RG_ring_3_1020033 00:06:11.466 end memzones------- 00:06:11.466 21:11:26 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:11.466 heap id: 0 total size: 820.000000 MiB number of busy elements: 41 number of free elements: 19 00:06:11.466 list of free elements. 
size: 18.514832 MiB 00:06:11.466 element at address: 0x200000400000 with size: 1.999451 MiB 00:06:11.466 element at address: 0x200000800000 with size: 1.996887 MiB 00:06:11.466 element at address: 0x200007000000 with size: 1.995972 MiB 00:06:11.466 element at address: 0x20000b200000 with size: 1.995972 MiB 00:06:11.466 element at address: 0x200019100040 with size: 0.999939 MiB 00:06:11.466 element at address: 0x200019500040 with size: 0.999939 MiB 00:06:11.466 element at address: 0x200019600000 with size: 0.999329 MiB 00:06:11.466 element at address: 0x200003e00000 with size: 0.996094 MiB 00:06:11.466 element at address: 0x200032200000 with size: 0.994324 MiB 00:06:11.466 element at address: 0x200018e00000 with size: 0.959900 MiB 00:06:11.466 element at address: 0x200019900040 with size: 0.937256 MiB 00:06:11.466 element at address: 0x200000200000 with size: 0.840942 MiB 00:06:11.466 element at address: 0x20001b000000 with size: 0.583191 MiB 00:06:11.466 element at address: 0x200019200000 with size: 0.491150 MiB 00:06:11.466 element at address: 0x200019a00000 with size: 0.485657 MiB 00:06:11.466 element at address: 0x200013800000 with size: 0.470581 MiB 00:06:11.466 element at address: 0x200028400000 with size: 0.411072 MiB 00:06:11.466 element at address: 0x200003a00000 with size: 0.356140 MiB 00:06:11.466 element at address: 0x20000b1ff040 with size: 0.001038 MiB 00:06:11.466 list of standard malloc elements. size: 199.220764 MiB 00:06:11.466 element at address: 0x20000b3fef80 with size: 132.000183 MiB 00:06:11.466 element at address: 0x2000071fef80 with size: 64.000183 MiB 00:06:11.466 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:06:11.466 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:06:11.466 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:06:11.466 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:06:11.466 element at address: 0x2000199eff40 with size: 0.062683 MiB 00:06:11.466 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:06:11.466 element at address: 0x2000137ff040 with size: 0.000427 MiB 00:06:11.466 element at address: 0x2000137ffa00 with size: 0.000366 MiB 00:06:11.466 element at address: 0x2000002d7480 with size: 0.000244 MiB 00:06:11.466 element at address: 0x2000002d7580 with size: 0.000244 MiB 00:06:11.466 element at address: 0x2000002d7680 with size: 0.000244 MiB 00:06:11.466 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:06:11.466 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:06:11.466 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:06:11.466 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:06:11.466 element at address: 0x200003aff980 with size: 0.000244 MiB 00:06:11.466 element at address: 0x200003affa80 with size: 0.000244 MiB 00:06:11.466 element at address: 0x200003eff000 with size: 0.000244 MiB 00:06:11.466 element at address: 0x20000b1ff480 with size: 0.000244 MiB 00:06:11.466 element at address: 0x20000b1ff580 with size: 0.000244 MiB 00:06:11.466 element at address: 0x20000b1ff680 with size: 0.000244 MiB 00:06:11.466 element at address: 0x20000b1ff780 with size: 0.000244 MiB 00:06:11.466 element at address: 0x20000b1ff880 with size: 0.000244 MiB 00:06:11.466 element at address: 0x20000b1ff980 with size: 0.000244 MiB 00:06:11.466 element at address: 0x20000b1ffc00 with size: 0.000244 MiB 00:06:11.466 element at address: 0x20000b1ffd00 with size: 0.000244 MiB 00:06:11.466 element at address: 0x20000b1ffe00 with size: 0.000244 MiB 
00:06:11.466 element at address: 0x20000b1fff00 with size: 0.000244 MiB 00:06:11.466 element at address: 0x2000137ff200 with size: 0.000244 MiB 00:06:11.466 element at address: 0x2000137ff300 with size: 0.000244 MiB 00:06:11.466 element at address: 0x2000137ff400 with size: 0.000244 MiB 00:06:11.466 element at address: 0x2000137ff500 with size: 0.000244 MiB 00:06:11.466 element at address: 0x2000137ff600 with size: 0.000244 MiB 00:06:11.466 element at address: 0x2000137ff700 with size: 0.000244 MiB 00:06:11.466 element at address: 0x2000137ff800 with size: 0.000244 MiB 00:06:11.466 element at address: 0x2000137ff900 with size: 0.000244 MiB 00:06:11.466 element at address: 0x2000137ffb80 with size: 0.000244 MiB 00:06:11.466 element at address: 0x2000137ffc80 with size: 0.000244 MiB 00:06:11.466 element at address: 0x2000137fff00 with size: 0.000244 MiB 00:06:11.466 list of memzone associated elements. size: 602.264404 MiB 00:06:11.466 element at address: 0x20001b0954c0 with size: 211.416809 MiB 00:06:11.466 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:11.466 element at address: 0x20002846ff80 with size: 157.562622 MiB 00:06:11.466 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:11.466 element at address: 0x2000139fab40 with size: 84.020691 MiB 00:06:11.466 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_1020033_0 00:06:11.466 element at address: 0x2000009ff340 with size: 48.003113 MiB 00:06:11.466 associated memzone info: size: 48.002930 MiB name: MP_evtpool_1020033_0 00:06:11.466 element at address: 0x200003fff340 with size: 48.003113 MiB 00:06:11.466 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1020033_0 00:06:11.466 element at address: 0x200019bbe900 with size: 20.255615 MiB 00:06:11.466 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:11.466 element at address: 0x2000323feb00 with size: 18.005127 MiB 00:06:11.466 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:11.466 element at address: 0x2000005ffdc0 with size: 2.000549 MiB 00:06:11.466 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_1020033 00:06:11.466 element at address: 0x200003bffdc0 with size: 2.000549 MiB 00:06:11.466 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1020033 00:06:11.466 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:06:11.466 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1020033 00:06:11.466 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:06:11.466 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:11.466 element at address: 0x200019abc780 with size: 1.008179 MiB 00:06:11.466 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:11.466 element at address: 0x200018efde00 with size: 1.008179 MiB 00:06:11.466 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:11.466 element at address: 0x2000138f89c0 with size: 1.008179 MiB 00:06:11.466 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:11.466 element at address: 0x200003eff100 with size: 1.000549 MiB 00:06:11.466 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1020033 00:06:11.466 element at address: 0x200003affb80 with size: 1.000549 MiB 00:06:11.466 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1020033 00:06:11.466 element at address: 0x2000196ffd40 with size: 1.000549 MiB 00:06:11.466 associated memzone info: size: 
1.000366 MiB name: RG_ring_4_1020033 00:06:11.466 element at address: 0x2000322fe8c0 with size: 1.000549 MiB 00:06:11.466 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1020033 00:06:11.466 element at address: 0x200003a5b2c0 with size: 0.500549 MiB 00:06:11.466 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1020033 00:06:11.466 element at address: 0x20001927dbc0 with size: 0.500549 MiB 00:06:11.466 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:11.466 element at address: 0x200013878780 with size: 0.500549 MiB 00:06:11.466 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:11.466 element at address: 0x200019a7c540 with size: 0.250549 MiB 00:06:11.466 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:11.466 element at address: 0x200003adf740 with size: 0.125549 MiB 00:06:11.466 associated memzone info: size: 0.125366 MiB name: RG_ring_2_1020033 00:06:11.466 element at address: 0x200018ef5bc0 with size: 0.031799 MiB 00:06:11.466 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:11.466 element at address: 0x2000284693c0 with size: 0.023804 MiB 00:06:11.466 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:11.466 element at address: 0x200003adb500 with size: 0.016174 MiB 00:06:11.466 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1020033 00:06:11.466 element at address: 0x20002846f540 with size: 0.002502 MiB 00:06:11.466 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:11.466 element at address: 0x2000002d7780 with size: 0.000366 MiB 00:06:11.466 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1020033 00:06:11.466 element at address: 0x2000137ffd80 with size: 0.000366 MiB 00:06:11.466 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1020033 00:06:11.466 element at address: 0x20000b1ffa80 with size: 0.000366 MiB 00:06:11.466 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:11.466 21:11:26 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:11.466 21:11:26 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1020033 00:06:11.466 21:11:26 -- common/autotest_common.sh@936 -- # '[' -z 1020033 ']' 00:06:11.466 21:11:26 -- common/autotest_common.sh@940 -- # kill -0 1020033 00:06:11.466 21:11:26 -- common/autotest_common.sh@941 -- # uname 00:06:11.466 21:11:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:11.466 21:11:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1020033 00:06:11.466 21:11:26 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:11.466 21:11:26 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:11.466 21:11:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1020033' 00:06:11.466 killing process with pid 1020033 00:06:11.466 21:11:26 -- common/autotest_common.sh@955 -- # kill 1020033 00:06:11.466 21:11:26 -- common/autotest_common.sh@960 -- # wait 1020033 00:06:12.408 00:06:12.408 real 0m1.827s 00:06:12.408 user 0m1.735s 00:06:12.408 sys 0m0.478s 00:06:12.408 21:11:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:12.408 21:11:27 -- common/autotest_common.sh@10 -- # set +x 00:06:12.408 ************************************ 00:06:12.408 END TEST dpdk_mem_utility 00:06:12.408 ************************************ 00:06:12.408 21:11:27 -- spdk/autotest.sh@177 -- # run_test event 
/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/event.sh 00:06:12.408 21:11:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:12.408 21:11:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:12.408 21:11:27 -- common/autotest_common.sh@10 -- # set +x 00:06:12.408 ************************************ 00:06:12.408 START TEST event 00:06:12.408 ************************************ 00:06:12.408 21:11:27 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/event.sh 00:06:12.408 * Looking for test storage... 00:06:12.408 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event 00:06:12.408 21:11:27 -- event/event.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:12.408 21:11:27 -- bdev/nbd_common.sh@6 -- # set -e 00:06:12.408 21:11:27 -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:12.408 21:11:27 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:12.408 21:11:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:12.408 21:11:27 -- common/autotest_common.sh@10 -- # set +x 00:06:12.408 ************************************ 00:06:12.408 START TEST event_perf 00:06:12.408 ************************************ 00:06:12.408 21:11:27 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:12.668 Running I/O for 1 seconds...[2024-04-24 21:11:27.401781] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization... 00:06:12.668 [2024-04-24 21:11:27.401886] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1020417 ] 00:06:12.668 EAL: No free 2048 kB hugepages reported on node 1 00:06:12.668 [2024-04-24 21:11:27.517515] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:12.669 [2024-04-24 21:11:27.617008] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:12.669 [2024-04-24 21:11:27.617145] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:12.669 [2024-04-24 21:11:27.617400] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.669 [2024-04-24 21:11:27.617403] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:14.053 Running I/O for 1 seconds... 00:06:14.053 lcore 0: 151857 00:06:14.053 lcore 1: 151856 00:06:14.053 lcore 2: 151858 00:06:14.053 lcore 3: 151854 00:06:14.053 done. 
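event_perf ran with -m 0xF, so one reactor polls per set mask bit (cores 0-3) and each prints its own event counter when the run ends. Assuming the 'lcore N: count' output format shown above, the aggregate rate is just the sum, roughly 607k events across the four reactors in this 1-second run:

    perf=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/event_perf/event_perf
    $perf -m 0xF -t 1 | awk '/^lcore/ {sum += $3} END {print "total events/sec:", sum}'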
00:06:14.053 00:06:14.053 real 0m1.399s 00:06:14.053 user 0m4.240s 00:06:14.053 sys 0m0.142s 00:06:14.053 21:11:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:14.053 21:11:28 -- common/autotest_common.sh@10 -- # set +x 00:06:14.053 ************************************ 00:06:14.053 END TEST event_perf 00:06:14.053 ************************************ 00:06:14.053 21:11:28 -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:14.053 21:11:28 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:14.053 21:11:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:14.053 21:11:28 -- common/autotest_common.sh@10 -- # set +x 00:06:14.053 ************************************ 00:06:14.053 START TEST event_reactor 00:06:14.053 ************************************ 00:06:14.053 21:11:28 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:14.053 [2024-04-24 21:11:28.904849] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization... 00:06:14.053 [2024-04-24 21:11:28.904949] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1020748 ] 00:06:14.053 EAL: No free 2048 kB hugepages reported on node 1 00:06:14.053 [2024-04-24 21:11:29.017833] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.314 [2024-04-24 21:11:29.107813] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.710 test_start 00:06:15.710 oneshot 00:06:15.710 tick 100 00:06:15.710 tick 100 00:06:15.710 tick 250 00:06:15.710 tick 100 00:06:15.710 tick 100 00:06:15.710 tick 100 00:06:15.710 tick 250 00:06:15.710 tick 500 00:06:15.710 tick 100 00:06:15.710 tick 100 00:06:15.710 tick 250 00:06:15.710 tick 100 00:06:15.710 tick 100 00:06:15.710 test_end 00:06:15.710 00:06:15.710 real 0m1.380s 00:06:15.710 user 0m1.241s 00:06:15.710 sys 0m0.133s 00:06:15.710 21:11:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:15.710 21:11:30 -- common/autotest_common.sh@10 -- # set +x 00:06:15.710 ************************************ 00:06:15.710 END TEST event_reactor 00:06:15.710 ************************************ 00:06:15.710 21:11:30 -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:15.710 21:11:30 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:15.710 21:11:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:15.710 21:11:30 -- common/autotest_common.sh@10 -- # set +x 00:06:15.710 ************************************ 00:06:15.710 START TEST event_reactor_perf 00:06:15.710 ************************************ 00:06:15.710 21:11:30 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:15.710 [2024-04-24 21:11:30.396682] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization... 
00:06:15.710 [2024-04-24 21:11:30.396790] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1021073 ] 00:06:15.710 EAL: No free 2048 kB hugepages reported on node 1 00:06:15.710 [2024-04-24 21:11:30.511022] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.710 [2024-04-24 21:11:30.601574] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.093 test_start 00:06:17.093 test_end 00:06:17.093 Performance: 424599 events per second 00:06:17.094 00:06:17.094 real 0m1.385s 00:06:17.094 user 0m1.255s 00:06:17.094 sys 0m0.123s 00:06:17.094 21:11:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:17.094 21:11:31 -- common/autotest_common.sh@10 -- # set +x 00:06:17.094 ************************************ 00:06:17.094 END TEST event_reactor_perf 00:06:17.094 ************************************ 00:06:17.094 21:11:31 -- event/event.sh@49 -- # uname -s 00:06:17.094 21:11:31 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:17.094 21:11:31 -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:17.094 21:11:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:17.094 21:11:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:17.094 21:11:31 -- common/autotest_common.sh@10 -- # set +x 00:06:17.094 ************************************ 00:06:17.094 START TEST event_scheduler 00:06:17.094 ************************************ 00:06:17.094 21:11:31 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:17.094 * Looking for test storage... 00:06:17.094 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/scheduler 00:06:17.094 21:11:31 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:17.094 21:11:31 -- scheduler/scheduler.sh@35 -- # scheduler_pid=1021429 00:06:17.094 21:11:31 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:17.094 21:11:31 -- scheduler/scheduler.sh@37 -- # waitforlisten 1021429 00:06:17.094 21:11:31 -- common/autotest_common.sh@817 -- # '[' -z 1021429 ']' 00:06:17.094 21:11:31 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:17.094 21:11:31 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:17.094 21:11:31 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:17.094 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:17.094 21:11:31 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:17.094 21:11:31 -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:17.094 21:11:31 -- common/autotest_common.sh@10 -- # set +x 00:06:17.094 [2024-04-24 21:11:32.009318] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization... 
00:06:17.094 [2024-04-24 21:11:32.009468] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1021429 ] 00:06:17.355 EAL: No free 2048 kB hugepages reported on node 1 00:06:17.355 [2024-04-24 21:11:32.143614] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:17.355 [2024-04-24 21:11:32.243074] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.355 [2024-04-24 21:11:32.243199] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:17.355 [2024-04-24 21:11:32.243311] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:17.355 [2024-04-24 21:11:32.243321] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:17.926 21:11:32 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:17.926 21:11:32 -- common/autotest_common.sh@850 -- # return 0 00:06:17.926 21:11:32 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:17.926 21:11:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:17.926 21:11:32 -- common/autotest_common.sh@10 -- # set +x 00:06:17.926 POWER: Env isn't set yet! 00:06:17.926 POWER: Attempting to initialise ACPI cpufreq power management... 00:06:17.926 POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:17.926 POWER: Cannot set governor of lcore 0 to userspace 00:06:17.926 POWER: Attempting to initialise PSTAT power management... 00:06:17.926 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:06:17.926 POWER: Initialized successfully for lcore 0 power management 00:06:17.926 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:06:17.926 POWER: Initialized successfully for lcore 1 power management 00:06:17.926 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:06:17.926 POWER: Initialized successfully for lcore 2 power management 00:06:17.926 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:06:17.926 POWER: Initialized successfully for lcore 3 power management 00:06:17.926 21:11:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:17.926 21:11:32 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:17.926 21:11:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:17.926 21:11:32 -- common/autotest_common.sh@10 -- # set +x 00:06:18.186 [2024-04-24 21:11:33.024386] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
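The POWER messages above were emitted while framework_set_scheduler switched to the dynamic scheduler, which initializes per-lcore CPU power management; framework_start_init then brings the reactors up. Because the scheduler app was launched with --wait-for-rpc, the test can drive this over RPC before init. The same sequence by hand, assuming the app's default socket /var/tmp/spdk.sock (all three methods appear in the rpc_get_methods listing earlier):

    rpc=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py
    $rpc framework_set_scheduler dynamic   # only accepted pre-init, hence --wait-for-rpc
    $rpc framework_start_init              # reactors start and the test application begins
    $rpc framework_get_scheduler           # confirm 'dynamic' is active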
00:06:18.186 21:11:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:06:18.186 21:11:33 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread
00:06:18.186 21:11:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:06:18.186 21:11:33 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:18.186 21:11:33 -- common/autotest_common.sh@10 -- # set +x
00:06:18.186 ************************************
00:06:18.186 START TEST scheduler_create_thread
00:06:18.186 ************************************
00:06:18.186 21:11:33 -- common/autotest_common.sh@1111 -- # scheduler_create_thread
00:06:18.186 21:11:33 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
00:06:18.186 21:11:33 -- common/autotest_common.sh@549 -- # xtrace_disable
00:06:18.186 21:11:33 -- common/autotest_common.sh@10 -- # set +x
00:06:18.186 2
00:06:18.186 21:11:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:06:18.186 21:11:33 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100
00:06:18.186 21:11:33 -- common/autotest_common.sh@549 -- # xtrace_disable
00:06:18.186 21:11:33 -- common/autotest_common.sh@10 -- # set +x
00:06:18.186 3
00:06:18.186 21:11:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:06:18.186 21:11:33 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100
00:06:18.186 21:11:33 -- common/autotest_common.sh@549 -- # xtrace_disable
00:06:18.186 21:11:33 -- common/autotest_common.sh@10 -- # set +x
00:06:18.447 4
00:06:18.447 21:11:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:06:18.447 21:11:33 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100
00:06:18.447 21:11:33 -- common/autotest_common.sh@549 -- # xtrace_disable
00:06:18.447 21:11:33 -- common/autotest_common.sh@10 -- # set +x
00:06:18.447 5
00:06:18.447 21:11:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:06:18.447 21:11:33 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
00:06:18.447 21:11:33 -- common/autotest_common.sh@549 -- # xtrace_disable
00:06:18.447 21:11:33 -- common/autotest_common.sh@10 -- # set +x
00:06:18.447 6
00:06:18.447 21:11:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:06:18.447 21:11:33 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0
00:06:18.447 21:11:33 -- common/autotest_common.sh@549 -- # xtrace_disable
00:06:18.447 21:11:33 -- common/autotest_common.sh@10 -- # set +x
00:06:18.447 7
00:06:18.447 21:11:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:06:18.447 21:11:33 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0
00:06:18.447 21:11:33 -- common/autotest_common.sh@549 -- # xtrace_disable
00:06:18.447 21:11:33 -- common/autotest_common.sh@10 -- # set +x
00:06:18.447 8
00:06:18.447 21:11:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:06:18.447 21:11:33 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0
00:06:18.447 21:11:33 -- common/autotest_common.sh@549 -- # xtrace_disable
00:06:18.447 21:11:33 -- common/autotest_common.sh@10 -- # set +x
00:06:18.447 9
00:06:18.447 21:11:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:06:18.447 21:11:33 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
00:06:18.447 21:11:33 -- common/autotest_common.sh@549 -- # xtrace_disable
00:06:18.447 21:11:33 -- common/autotest_common.sh@10 -- # set +x
00:06:18.447 10
00:06:18.447 21:11:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:06:18.447 21:11:33 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0
00:06:18.447 21:11:33 -- common/autotest_common.sh@549 -- # xtrace_disable
00:06:18.447 21:11:33 -- common/autotest_common.sh@10 -- # set +x
00:06:18.447 21:11:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:06:18.447 21:11:33 -- scheduler/scheduler.sh@22 -- # thread_id=11
00:06:18.447 21:11:33 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50
00:06:18.447 21:11:33 -- common/autotest_common.sh@549 -- # xtrace_disable
00:06:18.447 21:11:33 -- common/autotest_common.sh@10 -- # set +x
00:06:18.447 21:11:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:06:18.447 21:11:33 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100
00:06:18.447 21:11:33 -- common/autotest_common.sh@549 -- # xtrace_disable
00:06:18.447 21:11:33 -- common/autotest_common.sh@10 -- # set +x
00:06:18.447 21:11:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:06:18.447 21:11:33 -- scheduler/scheduler.sh@25 -- # thread_id=12
00:06:18.447 21:11:33 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12
00:06:18.447 21:11:33 -- common/autotest_common.sh@549 -- # xtrace_disable
00:06:18.447 21:11:33 -- common/autotest_common.sh@10 -- # set +x
00:06:19.389 21:11:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:06:19.389
00:06:19.389 real 0m1.171s
00:06:19.389 user 0m0.010s
00:06:19.389 sys 0m0.003s
00:06:19.389 21:11:34 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:06:19.389 21:11:34 -- common/autotest_common.sh@10 -- # set +x
00:06:19.389 ************************************
00:06:19.389 END TEST scheduler_create_thread
00:06:19.389 ************************************
00:06:19.389 21:11:34 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT
00:06:19.389 21:11:34 -- scheduler/scheduler.sh@46 -- # killprocess 1021429
00:06:19.389 21:11:34 -- common/autotest_common.sh@936 -- # '[' -z 1021429 ']'
00:06:19.389 21:11:34 -- common/autotest_common.sh@940 -- # kill -0 1021429
00:06:19.389 21:11:34 -- common/autotest_common.sh@941 -- # uname
00:06:19.389 21:11:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:06:19.389 21:11:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1021429
00:06:19.650 21:11:34 -- common/autotest_common.sh@942 -- # process_name=reactor_2
00:06:19.650 21:11:34 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']'
00:06:19.650 21:11:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1021429'
00:06:19.650 killing process with pid 1021429
00:06:19.650 21:11:34 -- common/autotest_common.sh@955 -- # kill 1021429
00:06:19.650 21:11:34 -- common/autotest_common.sh@960 -- # wait 1021429
00:06:19.911 [2024-04-24 21:11:34.780995] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped.
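The bare numbers 2 through 10 interleaved above are the thread counts echoed back as each create RPC returns, while thread IDs 11 and 12 are captured into thread_id. Reassembled from the xtrace, the create/set/delete cycle is roughly this sketch (the --plugin calls come from the scheduler test app, not from stock rpc.py; IDs would differ on another run):

    # Sketch of the scheduler_create_thread body exercised above.
    rpc="rpc_cmd --plugin scheduler_plugin"
    $rpc scheduler_thread_create -n active_pinned -m 0x1 -a 100   # repeated for masks 0x1..0x8
    $rpc scheduler_thread_create -n idle_pinned -m 0x1 -a 0       # likewise for masks 0x1..0x8
    $rpc scheduler_thread_create -n one_third_active -a 30        # unpinned, 30% active
    thread_id=$($rpc scheduler_thread_create -n half_active -a 0) # created idle (id 11 above)...
    $rpc scheduler_thread_set_active "$thread_id" 50              # ...then raised to 50% active
    thread_id=$($rpc scheduler_thread_create -n deleted -a 100)   # id 12 above
    $rpc scheduler_thread_delete "$thread_id"                     # create/delete round trip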
00:06:20.171 POWER: Power management governor of lcore 0 has been set to 'powersave' successfully
00:06:20.171 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original
00:06:20.171 POWER: Power management governor of lcore 1 has been set to 'powersave' successfully
00:06:20.171 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original
00:06:20.171 POWER: Power management governor of lcore 2 has been set to 'powersave' successfully
00:06:20.171 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original
00:06:20.171 POWER: Power management governor of lcore 3 has been set to 'powersave' successfully
00:06:20.171 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original
00:06:20.432
00:06:20.432 real 0m3.385s
00:06:20.432 user 0m5.599s
00:06:20.432 sys 0m0.540s
00:06:20.432 21:11:35 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:06:20.432 21:11:35 -- common/autotest_common.sh@10 -- # set +x
00:06:20.432 ************************************
00:06:20.432 END TEST event_scheduler
00:06:20.432 ************************************
00:06:20.432 21:11:35 -- event/event.sh@51 -- # modprobe -n nbd
00:06:20.432 21:11:35 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test
00:06:20.432 21:11:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:06:20.432 21:11:35 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:20.432 21:11:35 -- common/autotest_common.sh@10 -- # set +x
00:06:20.432 ************************************
00:06:20.432 START TEST app_repeat
00:06:20.432 ************************************
00:06:20.432 21:11:35 -- common/autotest_common.sh@1111 -- # app_repeat_test
00:06:20.432 21:11:35 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:20.432 21:11:35 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:20.432 21:11:35 -- event/event.sh@13 -- # local nbd_list
00:06:20.432 21:11:35 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:20.432 21:11:35 -- event/event.sh@14 -- # local bdev_list
00:06:20.432 21:11:35 -- event/event.sh@15 -- # local repeat_times=4
00:06:20.432 21:11:35 -- event/event.sh@17 -- # modprobe nbd
00:06:20.432 21:11:35 -- event/event.sh@19 -- # repeat_pid=1022095
00:06:20.432 21:11:35 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
00:06:20.432 21:11:35 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1022095'
00:06:20.432 Process app_repeat pid: 1022095
00:06:20.432 21:11:35 -- event/event.sh@23 -- # for i in {0..2}
00:06:20.432 21:11:35 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0'
00:06:20.432 spdk_app_start Round 0
00:06:20.432 21:11:35 -- event/event.sh@25 -- # waitforlisten 1022095 /var/tmp/spdk-nbd.sock
00:06:20.432 21:11:35 -- event/event.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4
00:06:20.432 21:11:35 -- common/autotest_common.sh@817 -- # '[' -z 1022095 ']'
00:06:20.432 21:11:35 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:06:20.432 21:11:35 -- common/autotest_common.sh@822 -- # local max_retries=100
00:06:20.432 21:11:35 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:06:20.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
21:11:35 -- common/autotest_common.sh@826 -- # xtrace_disable
00:06:20.432 21:11:35 -- common/autotest_common.sh@10 -- # set +x
00:06:20.718 [2024-04-24 21:11:35.411885] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization...
00:06:20.718 [2024-04-24 21:11:35.411989] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1022095 ]
00:06:20.718 EAL: No free 2048 kB hugepages reported on node 1
00:06:20.718 [2024-04-24 21:11:35.528433] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2
00:06:20.718 [2024-04-24 21:11:35.624977] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:20.718 [2024-04-24 21:11:35.624982] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:06:21.290 21:11:36 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:06:21.290 21:11:36 -- common/autotest_common.sh@850 -- # return 0
00:06:21.290 21:11:36 -- event/event.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:06:21.551 Malloc0
00:06:21.551 21:11:36 -- event/event.sh@28 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:06:21.551 Malloc1
00:06:21.551 21:11:36 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:06:21.551 21:11:36 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:21.551 21:11:36 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:21.551 21:11:36 -- bdev/nbd_common.sh@91 -- # local bdev_list
00:06:21.551 21:11:36 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:21.551 21:11:36 -- bdev/nbd_common.sh@92 -- # local nbd_list
00:06:21.551 21:11:36 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:06:21.551 21:11:36 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:21.551 21:11:36 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:21.551 21:11:36 -- bdev/nbd_common.sh@10 -- # local bdev_list
00:06:21.551 21:11:36 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:21.551 21:11:36 -- bdev/nbd_common.sh@11 -- # local nbd_list
00:06:21.551 21:11:36 -- bdev/nbd_common.sh@12 -- # local i
00:06:21.551 21:11:36 -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:06:21.551 21:11:36 -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:21.551 21:11:36 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:06:21.811 /dev/nbd0
00:06:21.812 21:11:36 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:06:21.812 21:11:36 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:06:21.812 21:11:36 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0
00:06:21.812 21:11:36 -- common/autotest_common.sh@855 -- # local i
00:06:21.812 21:11:36 -- common/autotest_common.sh@857 -- # (( i = 1 ))
00:06:21.812 21:11:36 -- common/autotest_common.sh@857 -- # (( i <= 20 ))
00:06:21.812 21:11:36 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions
00:06:21.812 21:11:36 -- common/autotest_common.sh@859 -- # break
00:06:21.812 21:11:36 -- common/autotest_common.sh@870 -- # (( i = 1 ))
00:06:21.812 21:11:36 -- common/autotest_common.sh@870 -- # (( i <= 20 ))
00:06:21.812 21:11:36 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:06:21.812 1+0 records in
00:06:21.812 1+0 records out
00:06:21.812 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000255608 s, 16.0 MB/s
00:06:21.812 21:11:36 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest
00:06:21.812 21:11:36 -- common/autotest_common.sh@872 -- # size=4096
00:06:21.812 21:11:36 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest
00:06:21.812 21:11:36 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']'
00:06:21.812 21:11:36 -- common/autotest_common.sh@875 -- # return 0
00:06:21.812 21:11:36 -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:21.812 21:11:36 -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:21.812 21:11:36 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:06:22.073 /dev/nbd1
00:06:22.073 21:11:36 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:06:22.073 21:11:36 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:06:22.073 21:11:36 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1
00:06:22.073 21:11:36 -- common/autotest_common.sh@855 -- # local i
00:06:22.073 21:11:36 -- common/autotest_common.sh@857 -- # (( i = 1 ))
00:06:22.073 21:11:36 -- common/autotest_common.sh@857 -- # (( i <= 20 ))
00:06:22.073 21:11:36 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions
00:06:22.073 21:11:36 -- common/autotest_common.sh@859 -- # break
00:06:22.073 21:11:36 -- common/autotest_common.sh@870 -- # (( i = 1 ))
00:06:22.073 21:11:36 -- common/autotest_common.sh@870 -- # (( i <= 20 ))
00:06:22.073 21:11:36 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:06:22.073 1+0 records in
00:06:22.073 1+0 records out
00:06:22.073 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000206485 s, 19.8 MB/s
00:06:22.073 21:11:36 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest
00:06:22.073 21:11:36 -- common/autotest_common.sh@872 -- # size=4096
00:06:22.073 21:11:36 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest
00:06:22.073 21:11:36 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']'
00:06:22.073 21:11:36 -- common/autotest_common.sh@875 -- # return 0
00:06:22.073 21:11:36 -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:22.073 21:11:36 -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:22.073 21:11:36 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:06:22.073 21:11:36 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:22.073 21:11:36 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:06:22.334 21:11:37 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:06:22.334 {
00:06:22.334 "nbd_device": "/dev/nbd0",
00:06:22.334 "bdev_name": "Malloc0"
00:06:22.334 },
00:06:22.334 {
00:06:22.334 "nbd_device": "/dev/nbd1",
00:06:22.334 "bdev_name": "Malloc1"
00:06:22.334 }
00:06:22.334 ]'
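Both waitfornbd blocks above follow the same two-stage pattern: poll /proc/partitions until the kernel lists the device, then prove it actually services I/O with one 4 KiB direct read. Reconstructed from the xtrace as a sketch (the retry cap of 20 is visible in the (( i <= 20 )) checks; the sleep between polls is an assumption, since xtrace elides it):

    # Sketch of waitfornbd as exercised above for nbd0 and nbd1.
    waitfornbd() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1    # assumed poll interval; not shown in the trace
        done
        # a listed device can still fail reads, so pull one block back
        dd if=/dev/$nbd_name of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
        local size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [ "$size" != 0 ]    # a non-empty read means the device is live
    }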
00:06:22.334 21:11:37 -- bdev/nbd_common.sh@64 -- # echo '[
00:06:22.334 {
00:06:22.334 "nbd_device": "/dev/nbd0",
00:06:22.334 "bdev_name": "Malloc0"
00:06:22.334 },
00:06:22.334 {
00:06:22.334 "nbd_device": "/dev/nbd1",
00:06:22.334 "bdev_name": "Malloc1"
00:06:22.334 }
00:06:22.334 ]'
00:06:22.334 21:11:37 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:06:22.334 21:11:37 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:06:22.334 /dev/nbd1'
00:06:22.334 21:11:37 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:06:22.334 /dev/nbd1'
00:06:22.334 21:11:37 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:06:22.334 21:11:37 -- bdev/nbd_common.sh@65 -- # count=2
00:06:22.334 21:11:37 -- bdev/nbd_common.sh@66 -- # echo 2
00:06:22.334 21:11:37 -- bdev/nbd_common.sh@95 -- # count=2
00:06:22.334 21:11:37 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:06:22.334 21:11:37 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:06:22.334 21:11:37 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:22.334 21:11:37 -- bdev/nbd_common.sh@70 -- # local nbd_list
00:06:22.334 21:11:37 -- bdev/nbd_common.sh@71 -- # local operation=write
00:06:22.334 21:11:37 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest
00:06:22.334 21:11:37 -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:06:22.334 21:11:37 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:06:22.334 256+0 records in
00:06:22.334 256+0 records out
00:06:22.334 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00450009 s, 233 MB/s
00:06:22.334 21:11:37 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:06:22.334 21:11:37 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:06:22.334 256+0 records in
00:06:22.334 256+0 records out
00:06:22.334 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0145604 s, 72.0 MB/s
00:06:22.334 21:11:37 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:06:22.334 21:11:37 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:06:22.334 256+0 records in
00:06:22.334 256+0 records out
00:06:22.334 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0166602 s, 62.9 MB/s
00:06:22.334 21:11:37 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:06:22.334 21:11:37 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:22.334 21:11:37 -- bdev/nbd_common.sh@70 -- # local nbd_list
00:06:22.334 21:11:37 -- bdev/nbd_common.sh@71 -- # local operation=verify
00:06:22.334 21:11:37 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest
00:06:22.334 21:11:37 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:06:22.334 21:11:37 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:06:22.334 21:11:37 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:06:22.334 21:11:37 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:06:22.334 21:11:37 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:06:22.334 21:11:37 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
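The dd/cmp pairs above are the data-integrity core of nbd_rpc_data_verify: 1 MiB of random data is written through each nbd device, and therefore through the Malloc bdev behind it, then compared back against the source file. In outline, as a sketch assembled from the commands in the trace:

    # Sketch of the write/verify cycle shown above.
    tmp=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest
    dd if=/dev/urandom of=$tmp bs=4096 count=256            # 1 MiB test pattern
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if=$tmp of=$nbd bs=4096 count=256 oflag=direct   # write phase
    done
    for nbd in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M $tmp $nbd                              # verify phase
    done
    rm $tmp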
21:11:37 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest
00:06:22.334 21:11:37 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:06:22.334 21:11:37 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:22.334 21:11:37 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:22.334 21:11:37 -- bdev/nbd_common.sh@50 -- # local nbd_list
00:06:22.334 21:11:37 -- bdev/nbd_common.sh@51 -- # local i
00:06:22.334 21:11:37 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:22.334 21:11:37 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:06:22.594 21:11:37 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:06:22.594 21:11:37 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:06:22.594 21:11:37 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:06:22.594 21:11:37 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:22.594 21:11:37 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:22.594 21:11:37 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:06:22.594 21:11:37 -- bdev/nbd_common.sh@41 -- # break
00:06:22.594 21:11:37 -- bdev/nbd_common.sh@45 -- # return 0
00:06:22.594 21:11:37 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:22.594 21:11:37 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:06:22.594 21:11:37 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:06:22.594 21:11:37 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:06:22.594 21:11:37 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:06:22.594 21:11:37 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:22.594 21:11:37 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:22.594 21:11:37 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:06:22.594 21:11:37 -- bdev/nbd_common.sh@41 -- # break
00:06:22.594 21:11:37 -- bdev/nbd_common.sh@45 -- # return 0
00:06:22.594 21:11:37 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:06:22.594 21:11:37 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:22.594 21:11:37 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:06:22.855 21:11:37 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:06:22.855 21:11:37 -- bdev/nbd_common.sh@64 -- # echo '[]'
00:06:22.855 21:11:37 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:06:22.855 21:11:37 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:06:22.855 21:11:37 -- bdev/nbd_common.sh@65 -- # echo ''
00:06:22.855 21:11:37 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:06:22.855 21:11:37 -- bdev/nbd_common.sh@65 -- # true
00:06:22.855 21:11:37 -- bdev/nbd_common.sh@65 -- # count=0
00:06:22.855 21:11:37 -- bdev/nbd_common.sh@66 -- # echo 0
00:06:22.855 21:11:37 -- bdev/nbd_common.sh@104 -- # count=0
00:06:22.855 21:11:37 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:06:22.855 21:11:37 -- bdev/nbd_common.sh@109 -- # return 0
00:06:22.855 21:11:37 -- event/event.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:06:23.115 21:11:37 -- event/event.sh@35 -- # sleep 3
00:06:23.685 [2024-04-24 21:11:38.430248] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2
00:06:23.686 [2024-04-24 21:11:38.515932] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:06:23.686 [2024-04-24 21:11:38.515938] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:23.686 [2024-04-24 21:11:38.596617] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:06:23.686 [2024-04-24 21:11:38.596667] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:06:26.228 21:11:40 -- event/event.sh@23 -- # for i in {0..2}
00:06:26.228 21:11:40 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1'
00:06:26.228 spdk_app_start Round 1
00:06:26.228 21:11:40 -- event/event.sh@25 -- # waitforlisten 1022095 /var/tmp/spdk-nbd.sock
00:06:26.228 21:11:40 -- common/autotest_common.sh@817 -- # '[' -z 1022095 ']'
00:06:26.228 21:11:40 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:06:26.228 21:11:40 -- common/autotest_common.sh@822 -- # local max_retries=100
00:06:26.228 21:11:40 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:06:26.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:06:26.228 21:11:40 -- common/autotest_common.sh@826 -- # xtrace_disable
00:06:26.228 21:11:40 -- common/autotest_common.sh@10 -- # set +x
00:06:26.228 21:11:41 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:06:26.228 21:11:41 -- common/autotest_common.sh@850 -- # return 0
00:06:26.228 21:11:41 -- event/event.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:06:26.488 Malloc0
00:06:26.488 21:11:41 -- event/event.sh@28 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:06:26.488 Malloc1
00:06:26.749 21:11:41 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:06:26.749 21:11:41 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:26.749 21:11:41 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:26.749 21:11:41 -- bdev/nbd_common.sh@91 -- # local bdev_list
00:06:26.749 21:11:41 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:26.749 21:11:41 -- bdev/nbd_common.sh@92 -- # local nbd_list
00:06:26.749 21:11:41 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:06:26.749 21:11:41 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:26.749 21:11:41 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:26.749 21:11:41 -- bdev/nbd_common.sh@10 -- # local bdev_list
00:06:26.749 21:11:41 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:26.749 21:11:41 -- bdev/nbd_common.sh@11 -- # local nbd_list
00:06:26.749 21:11:41 -- bdev/nbd_common.sh@12 -- # local i
00:06:26.749 21:11:41 -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:06:26.749 21:11:41 -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:26.749 21:11:41 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:06:26.749 /dev/nbd0
00:06:26.749 21:11:41 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:06:26.749 21:11:41 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:06:26.749 21:11:41 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0
00:06:26.749 21:11:41 -- common/autotest_common.sh@855 -- # local i
00:06:26.749 21:11:41 -- common/autotest_common.sh@857 -- # (( i = 1 ))
00:06:26.749 21:11:41 -- common/autotest_common.sh@857 -- # (( i <= 20 ))
00:06:26.749 21:11:41 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions
00:06:26.749 21:11:41 -- common/autotest_common.sh@859 -- # break
00:06:26.749 21:11:41 -- common/autotest_common.sh@870 -- # (( i = 1 ))
00:06:26.749 21:11:41 -- common/autotest_common.sh@870 -- # (( i <= 20 ))
00:06:26.749 21:11:41 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:06:26.749 1+0 records in
00:06:26.749 1+0 records out
00:06:26.749 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000236169 s, 17.3 MB/s
00:06:26.749 21:11:41 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest
00:06:26.749 21:11:41 -- common/autotest_common.sh@872 -- # size=4096
00:06:26.749 21:11:41 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest
00:06:26.749 21:11:41 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']'
00:06:26.749 21:11:41 -- common/autotest_common.sh@875 -- # return 0
00:06:26.749 21:11:41 -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:26.749 21:11:41 -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:26.749 21:11:41 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:06:27.010 /dev/nbd1
00:06:27.010 21:11:41 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:06:27.010 21:11:41 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:06:27.010 21:11:41 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1
00:06:27.010 21:11:41 -- common/autotest_common.sh@855 -- # local i
00:06:27.010 21:11:41 -- common/autotest_common.sh@857 -- # (( i = 1 ))
00:06:27.010 21:11:41 -- common/autotest_common.sh@857 -- # (( i <= 20 ))
00:06:27.010 21:11:41 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions
00:06:27.010 21:11:41 -- common/autotest_common.sh@859 -- # break
00:06:27.010 21:11:41 -- common/autotest_common.sh@870 -- # (( i = 1 ))
00:06:27.010 21:11:41 -- common/autotest_common.sh@870 -- # (( i <= 20 ))
00:06:27.010 21:11:41 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:06:27.010 1+0 records in
00:06:27.010 1+0 records out
00:06:27.010 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000311526 s, 13.1 MB/s
00:06:27.010 21:11:41 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest
00:06:27.010 21:11:41 -- common/autotest_common.sh@872 -- # size=4096
00:06:27.010 21:11:41 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest
00:06:27.010 21:11:41 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']'
00:06:27.010 21:11:41 -- common/autotest_common.sh@875 -- # return 0
00:06:27.010 21:11:41 -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:27.010 21:11:41 -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:27.010 21:11:41 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:06:27.010 21:11:41 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:27.010 21:11:41 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:06:27.272 21:11:41 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:06:27.272 {
00:06:27.272 "nbd_device": "/dev/nbd0",
00:06:27.272 "bdev_name": "Malloc0"
00:06:27.272 },
00:06:27.272 {
00:06:27.272 "nbd_device": "/dev/nbd1",
00:06:27.272 "bdev_name": "Malloc1"
00:06:27.272 }
00:06:27.272 ]'
00:06:27.272 21:11:41 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:06:27.272 21:11:41 -- bdev/nbd_common.sh@64 -- # echo '[
00:06:27.272 {
00:06:27.272 "nbd_device": "/dev/nbd0",
00:06:27.272 "bdev_name": "Malloc0"
00:06:27.272 },
00:06:27.272 {
00:06:27.272 "nbd_device": "/dev/nbd1",
00:06:27.272 "bdev_name": "Malloc1"
00:06:27.272 }
00:06:27.272 ]'
00:06:27.272 21:11:42 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:06:27.272 /dev/nbd1'
00:06:27.272 21:11:42 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:06:27.272 /dev/nbd1'
00:06:27.272 21:11:42 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:06:27.272 21:11:42 -- bdev/nbd_common.sh@65 -- # count=2
00:06:27.272 21:11:42 -- bdev/nbd_common.sh@66 -- # echo 2
00:06:27.272 21:11:42 -- bdev/nbd_common.sh@95 -- # count=2
00:06:27.272 21:11:42 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:06:27.272 21:11:42 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:06:27.272 21:11:42 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:27.272 21:11:42 -- bdev/nbd_common.sh@70 -- # local nbd_list
00:06:27.272 21:11:42 -- bdev/nbd_common.sh@71 -- # local operation=write
00:06:27.272 21:11:42 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest
00:06:27.272 21:11:42 -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:06:27.272 21:11:42 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:06:27.272 256+0 records in
00:06:27.272 256+0 records out
00:06:27.272 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00450844 s, 233 MB/s
00:06:27.272 21:11:42 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:06:27.272 21:11:42 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:06:27.272 256+0 records in
00:06:27.272 256+0 records out
00:06:27.272 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0149618 s, 70.1 MB/s
00:06:27.272 21:11:42 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:06:27.272 21:11:42 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:06:27.272 256+0 records in
00:06:27.272 256+0 records out
00:06:27.272 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0175275 s, 59.8 MB/s
00:06:27.272 21:11:42 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:06:27.272 21:11:42 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:27.272 21:11:42 -- bdev/nbd_common.sh@70 -- # local nbd_list
00:06:27.272 21:11:42 -- bdev/nbd_common.sh@71 -- # local operation=verify
00:06:27.272 21:11:42 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest
00:06:27.272 21:11:42 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:06:27.272 21:11:42 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:06:27.272 21:11:42 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:06:27.272 21:11:42 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:06:27.272 21:11:42 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:06:27.272 21:11:42 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:06:27.272 21:11:42 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest
00:06:27.272 21:11:42 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:06:27.272 21:11:42 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:27.272 21:11:42 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:27.272 21:11:42 -- bdev/nbd_common.sh@50 -- # local nbd_list
00:06:27.272 21:11:42 -- bdev/nbd_common.sh@51 -- # local i
00:06:27.272 21:11:42 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:27.272 21:11:42 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:06:27.533 21:11:42 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:06:27.533 21:11:42 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:06:27.533 21:11:42 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:06:27.533 21:11:42 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:27.533 21:11:42 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:27.533 21:11:42 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:06:27.533 21:11:42 -- bdev/nbd_common.sh@41 -- # break
00:06:27.533 21:11:42 -- bdev/nbd_common.sh@45 -- # return 0
00:06:27.533 21:11:42 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:27.533 21:11:42 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:06:27.533 21:11:42 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:06:27.533 21:11:42 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:06:27.533 21:11:42 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:06:27.533 21:11:42 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:27.533 21:11:42 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:27.533 21:11:42 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:06:27.794 21:11:42 -- bdev/nbd_common.sh@41 -- # break
00:06:27.794 21:11:42 -- bdev/nbd_common.sh@45 -- # return 0
00:06:27.794 21:11:42 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:06:27.794 21:11:42 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:27.794 21:11:42 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:06:27.794 21:11:42 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:06:27.794 21:11:42 -- bdev/nbd_common.sh@64 -- # echo '[]'
00:06:27.794 21:11:42 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:06:27.794 21:11:42 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:06:27.794 21:11:42 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:06:27.794 21:11:42 -- bdev/nbd_common.sh@65 -- # echo ''
00:06:27.794 21:11:42 -- bdev/nbd_common.sh@65 -- # true
00:06:27.794 21:11:42 -- bdev/nbd_common.sh@65 -- # count=0
00:06:27.794 21:11:42 -- bdev/nbd_common.sh@66 -- # echo 0
00:06:27.794 21:11:42 -- bdev/nbd_common.sh@104 -- # count=0
00:06:27.794 21:11:42 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:06:27.794 21:11:42 -- bdev/nbd_common.sh@109 -- # return 0
00:06:27.794 21:11:42 -- event/event.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:06:28.055 21:11:42 -- event/event.sh@35 -- # sleep 3
00:06:28.623 [2024-04-24 21:11:43.375904] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2
00:06:28.623 [2024-04-24 21:11:43.466659] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:28.623 [2024-04-24 21:11:43.466662] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:06:28.623 [2024-04-24 21:11:43.559364] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:06:28.623 [2024-04-24 21:11:43.559402] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:06:31.167 21:11:45 -- event/event.sh@23 -- # for i in {0..2}
00:06:31.167 21:11:45 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2'
00:06:31.167 spdk_app_start Round 2
00:06:31.167 21:11:45 -- event/event.sh@25 -- # waitforlisten 1022095 /var/tmp/spdk-nbd.sock
00:06:31.167 21:11:45 -- common/autotest_common.sh@817 -- # '[' -z 1022095 ']'
00:06:31.167 21:11:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:06:31.167 21:11:45 -- common/autotest_common.sh@822 -- # local max_retries=100
00:06:31.167 21:11:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:06:31.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:06:31.167 21:11:45 -- common/autotest_common.sh@826 -- # xtrace_disable
00:06:31.167 21:11:45 -- common/autotest_common.sh@10 -- # set +x
00:06:31.167 21:11:46 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:06:31.167 21:11:46 -- common/autotest_common.sh@850 -- # return 0
00:06:31.167 21:11:46 -- event/event.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:06:31.427 Malloc0
00:06:31.427 21:11:46 -- event/event.sh@28 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:06:31.427 Malloc1
00:06:31.687 21:11:46 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:06:31.687 21:11:46 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:31.687 21:11:46 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:31.687 21:11:46 -- bdev/nbd_common.sh@91 -- # local bdev_list
00:06:31.687 21:11:46 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:31.687 21:11:46 -- bdev/nbd_common.sh@92 -- # local nbd_list
00:06:31.687 21:11:46 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:06:31.687 21:11:46 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:31.687 21:11:46 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:31.687 21:11:46 -- bdev/nbd_common.sh@10 -- # local bdev_list
00:06:31.687 21:11:46 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:31.687 21:11:46 -- bdev/nbd_common.sh@11 -- # local nbd_list
00:06:31.687 21:11:46 -- bdev/nbd_common.sh@12 -- # local i
00:06:31.687 21:11:46 -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:06:31.687 21:11:46 -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:31.687 21:11:46 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:06:31.687 /dev/nbd0
00:06:31.687 21:11:46 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:06:31.687 21:11:46 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:06:31.687 21:11:46 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0
00:06:31.687 21:11:46 -- common/autotest_common.sh@855 -- # local i
00:06:31.687 21:11:46 -- common/autotest_common.sh@857 -- # (( i = 1 ))
00:06:31.687 21:11:46 -- common/autotest_common.sh@857 -- # (( i <= 20 ))
00:06:31.687 21:11:46 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions
00:06:31.687 21:11:46 -- common/autotest_common.sh@859 -- # break
00:06:31.687 21:11:46 -- common/autotest_common.sh@870 -- # (( i = 1 ))
00:06:31.687 21:11:46 -- common/autotest_common.sh@870 -- # (( i <= 20 ))
00:06:31.688 21:11:46 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:06:31.688 1+0 records in
00:06:31.688 1+0 records out
00:06:31.688 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000260298 s, 15.7 MB/s
00:06:31.688 21:11:46 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest
00:06:31.688 21:11:46 -- common/autotest_common.sh@872 -- # size=4096
00:06:31.688 21:11:46 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest
00:06:31.688 21:11:46 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']'
00:06:31.688 21:11:46 -- common/autotest_common.sh@875 -- # return 0
00:06:31.688 21:11:46 -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:31.688 21:11:46 -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:31.688 21:11:46 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:06:31.947 /dev/nbd1
00:06:31.947 21:11:46 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:06:31.947 21:11:46 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:06:31.947 21:11:46 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1
00:06:31.947 21:11:46 -- common/autotest_common.sh@855 -- # local i
00:06:31.947 21:11:46 -- common/autotest_common.sh@857 -- # (( i = 1 ))
00:06:31.947 21:11:46 -- common/autotest_common.sh@857 -- # (( i <= 20 ))
00:06:31.947 21:11:46 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions
00:06:31.947 21:11:46 -- common/autotest_common.sh@859 -- # break
00:06:31.947 21:11:46 -- common/autotest_common.sh@870 -- # (( i = 1 ))
00:06:31.947 21:11:46 -- common/autotest_common.sh@870 -- # (( i <= 20 ))
00:06:31.947 21:11:46 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:06:31.947 1+0 records in
00:06:31.947 1+0 records out
00:06:31.947 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000216029 s, 19.0 MB/s
00:06:31.947 21:11:46 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest
00:06:31.947 21:11:46 -- common/autotest_common.sh@872 -- # size=4096
00:06:31.947 21:11:46 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest
00:06:31.947 21:11:46 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']'
00:06:31.947 21:11:46 -- common/autotest_common.sh@875 -- # return 0
00:06:31.947 21:11:46 -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:31.947 21:11:46 -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:31.947 21:11:46 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:06:31.947 21:11:46 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:31.947 21:11:46 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:06:32.207 21:11:46 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:06:32.207 {
00:06:32.207 "nbd_device": "/dev/nbd0",
00:06:32.207 "bdev_name": "Malloc0"
00:06:32.207 },
00:06:32.207 {
00:06:32.207 "nbd_device": "/dev/nbd1",
00:06:32.207 "bdev_name": "Malloc1"
00:06:32.207 }
00:06:32.207 ]'
00:06:32.207 21:11:46 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:06:32.207 21:11:46 -- bdev/nbd_common.sh@64 -- # echo '[
00:06:32.207 {
00:06:32.207 "nbd_device": "/dev/nbd0",
00:06:32.207 "bdev_name": "Malloc0"
00:06:32.207 },
00:06:32.207 {
00:06:32.207 "nbd_device": "/dev/nbd1",
00:06:32.207 "bdev_name": "Malloc1"
00:06:32.207 }
00:06:32.207 ]'
00:06:32.207 21:11:47 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:06:32.207 /dev/nbd1'
00:06:32.207 21:11:47 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:06:32.207 /dev/nbd1'
00:06:32.207 21:11:47 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:06:32.207 21:11:47 -- bdev/nbd_common.sh@65 -- # count=2
00:06:32.207 21:11:47 -- bdev/nbd_common.sh@66 -- # echo 2
00:06:32.207 21:11:47 -- bdev/nbd_common.sh@95 -- # count=2
00:06:32.207 21:11:47 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:06:32.207 21:11:47 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:06:32.207 21:11:47 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:32.207 21:11:47 -- bdev/nbd_common.sh@70 -- # local nbd_list
00:06:32.207 21:11:47 -- bdev/nbd_common.sh@71 -- # local operation=write
00:06:32.207 21:11:47 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest
00:06:32.207 21:11:47 -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:06:32.207 21:11:47 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:06:32.207 256+0 records in
00:06:32.207 256+0 records out
00:06:32.207 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00503016 s, 208 MB/s
00:06:32.207 21:11:47 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:06:32.207 21:11:47 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:06:32.207 256+0 records in
00:06:32.207 256+0 records out
00:06:32.207 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0148071 s, 70.8 MB/s
00:06:32.207 21:11:47 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:06:32.207 21:11:47 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:06:32.207 256+0 records in
00:06:32.207 256+0 records out
00:06:32.207 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.016277 s, 64.4 MB/s
00:06:32.207 21:11:47 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:06:32.207 21:11:47 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:32.207 21:11:47 -- bdev/nbd_common.sh@70 -- # local nbd_list
00:06:32.207 21:11:47 -- bdev/nbd_common.sh@71 -- # local operation=verify
00:06:32.207 21:11:47 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest
00:06:32.207 21:11:47 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:06:32.207 21:11:47 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:06:32.207 21:11:47 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:06:32.207 21:11:47 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:06:32.207 21:11:47 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:06:32.207 21:11:47 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:06:32.207 21:11:47 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest
00:06:32.207 21:11:47 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:06:32.207 21:11:47 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:32.207 21:11:47 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:32.207 21:11:47 -- bdev/nbd_common.sh@50 -- # local nbd_list
00:06:32.207 21:11:47 -- bdev/nbd_common.sh@51 -- # local i
00:06:32.207 21:11:47 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:32.207 21:11:47 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:06:32.468 21:11:47 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:06:32.468 21:11:47 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:06:32.468 21:11:47 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:06:32.468 21:11:47 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:32.468 21:11:47 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:32.468 21:11:47 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:06:32.468 21:11:47 -- bdev/nbd_common.sh@41 -- # break
00:06:32.468 21:11:47 -- bdev/nbd_common.sh@45 -- # return 0
00:06:32.468 21:11:47 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:32.468 21:11:47 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:06:32.729 21:11:47 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:06:32.729 21:11:47 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:06:32.729 21:11:47 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:06:32.729 21:11:47 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:32.729 21:11:47 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:32.729 21:11:47 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:06:32.729 21:11:47 -- bdev/nbd_common.sh@41 -- # break
00:06:32.729 21:11:47 -- bdev/nbd_common.sh@45 -- # return 0
00:06:32.729 21:11:47 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:06:32.729 21:11:47 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:32.729 21:11:47 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:06:32.729 21:11:47 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:06:32.729 21:11:47 -- bdev/nbd_common.sh@64 -- # echo '[]'
00:06:32.729 21:11:47 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:06:32.729 21:11:47 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
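Teardown mirrors startup: nbd_stop_disk is issued per device, and waitfornbd_exit polls /proc/partitions until the entry disappears, the inverse of the waitfornbd loop earlier. Roughly, as a sketch (the retry cap of 20 is taken from the (( i <= 20 )) checks above; the poll interval is an assumption):

    # Sketch of waitfornbd_exit as exercised above.
    waitfornbd_exit() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            # leave the loop as soon as the kernel stops listing the device
            grep -q -w "$nbd_name" /proc/partitions || break
            sleep 0.1    # assumed interval; elided by xtrace
        done
        return 0
    }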
00:06:32.729 21:11:47 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:06:32.729 21:11:47 -- bdev/nbd_common.sh@65 -- # echo ''
00:06:32.729 21:11:47 -- bdev/nbd_common.sh@65 -- # true
00:06:32.729 21:11:47 -- bdev/nbd_common.sh@65 -- # count=0
00:06:32.729 21:11:47 -- bdev/nbd_common.sh@66 -- # echo 0
00:06:32.729 21:11:47 -- bdev/nbd_common.sh@104 -- # count=0
00:06:32.729 21:11:47 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:06:32.729 21:11:47 -- bdev/nbd_common.sh@109 -- # return 0
00:06:32.729 21:11:47 -- event/event.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:06:32.990 21:11:47 -- event/event.sh@35 -- # sleep 3
00:06:33.559 [2024-04-24 21:11:48.372563] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2
00:06:33.559 [2024-04-24 21:11:48.467070] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:06:33.559 [2024-04-24 21:11:48.467076] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:33.820 [2024-04-24 21:11:48.551179] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:06:33.820 [2024-04-24 21:11:48.551216] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:06:36.366 21:11:50 -- event/event.sh@38 -- # waitforlisten 1022095 /var/tmp/spdk-nbd.sock
00:06:36.366 21:11:50 -- common/autotest_common.sh@817 -- # '[' -z 1022095 ']'
00:06:36.366 21:11:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:06:36.366 21:11:50 -- common/autotest_common.sh@822 -- # local max_retries=100
00:06:36.366 21:11:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:06:36.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:06:36.366 21:11:50 -- common/autotest_common.sh@826 -- # xtrace_disable
00:06:36.366 21:11:50 -- common/autotest_common.sh@10 -- # set +x
00:06:36.366 21:11:51 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:06:36.366 21:11:51 -- common/autotest_common.sh@850 -- # return 0
00:06:36.366 21:11:51 -- event/event.sh@39 -- # killprocess 1022095
00:06:36.366 21:11:51 -- common/autotest_common.sh@936 -- # '[' -z 1022095 ']'
00:06:36.366 21:11:51 -- common/autotest_common.sh@940 -- # kill -0 1022095
00:06:36.366 21:11:51 -- common/autotest_common.sh@941 -- # uname
00:06:36.366 21:11:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:06:36.366 21:11:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1022095
00:06:36.366 21:11:51 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:06:36.366 21:11:51 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:06:36.366 21:11:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1022095'
00:06:36.366 killing process with pid 1022095
00:06:36.366 21:11:51 -- common/autotest_common.sh@955 -- # kill 1022095
00:06:36.366 21:11:51 -- common/autotest_common.sh@960 -- # wait 1022095
00:06:36.626 spdk_app_start is called in Round 0.
00:06:36.626 Shutdown signal received, stop current app iteration
00:06:36.626 Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 reinitialization...
00:06:36.626 spdk_app_start is called in Round 1.
00:06:36.626 Shutdown signal received, stop current app iteration
00:06:36.626 Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 reinitialization...
00:06:36.626 spdk_app_start is called in Round 2.
00:06:36.626 Shutdown signal received, stop current app iteration
00:06:36.626 Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 reinitialization...
00:06:36.626 spdk_app_start is called in Round 3.
00:06:36.626 Shutdown signal received, stop current app iteration
00:06:36.626 21:11:51 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT
00:06:36.626 21:11:51 -- event/event.sh@42 -- # return 0
00:06:36.626
00:06:36.626 real 0m16.141s
00:06:36.626 user 0m33.453s
00:06:36.626 sys 0m2.196s
00:06:36.626 21:11:51 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:06:36.626 21:11:51 -- common/autotest_common.sh@10 -- # set +x
00:06:36.626 ************************************
00:06:36.626 END TEST app_repeat
00:06:36.626 ************************************
00:06:36.626 21:11:51 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 ))
00:06:36.626 21:11:51 -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/cpu_locks.sh
00:06:36.626 21:11:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:06:36.626 21:11:51 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:36.626 21:11:51 -- common/autotest_common.sh@10 -- # set +x
00:06:36.886 ************************************
00:06:36.886 START TEST cpu_locks
00:06:36.886 ************************************
00:06:36.886 21:11:51 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/cpu_locks.sh
00:06:36.886 * Looking for test storage...
00:06:36.886 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event
00:06:36.886 21:11:51 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock
00:06:36.886 21:11:51 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock
00:06:36.886 21:11:51 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT
00:06:36.886 21:11:51 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks
00:06:36.886 21:11:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:06:36.886 21:11:51 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:36.886 21:11:51 -- common/autotest_common.sh@10 -- # set +x
00:06:36.886 ************************************
00:06:36.886 START TEST default_locks
00:06:36.886 ************************************
00:06:36.886 21:11:51 -- common/autotest_common.sh@1111 -- # default_locks
00:06:36.886 21:11:51 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1025609
00:06:36.886 21:11:51 -- event/cpu_locks.sh@47 -- # waitforlisten 1025609
00:06:36.886 21:11:51 -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:06:36.886 21:11:51 -- common/autotest_common.sh@817 -- # '[' -z 1025609 ']'
00:06:36.886 21:11:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:36.886 21:11:51 -- common/autotest_common.sh@822 -- # local max_retries=100
00:06:36.886 21:11:51 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
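waitforlisten blocks the test until the freshly forked spdk_tgt answers on its RPC socket, with max_retries=100 bounding the wait (both visible in the xtrace above). The helper behaves roughly like the following sketch; the use of rpc_get_methods as the liveness probe is an assumption, since the log only shows the surrounding bookkeeping:

    # Sketch of waitforlisten as used above.
    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1        # target died during startup
            rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
            sleep 0.1                                     # assumed retry interval
        done
        return 1
    }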
00:06:36.886 21:11:51 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:36.886 21:11:51 -- common/autotest_common.sh@10 -- # set +x 00:06:36.886 [2024-04-24 21:11:51.841752] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization... 00:06:36.886 [2024-04-24 21:11:51.841826] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1025609 ] 00:06:37.146 EAL: No free 2048 kB hugepages reported on node 1 00:06:37.146 [2024-04-24 21:11:51.931472] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.146 [2024-04-24 21:11:52.027617] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.719 21:11:52 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:37.719 21:11:52 -- common/autotest_common.sh@850 -- # return 0 00:06:37.719 21:11:52 -- event/cpu_locks.sh@49 -- # locks_exist 1025609 00:06:37.719 21:11:52 -- event/cpu_locks.sh@22 -- # lslocks -p 1025609 00:06:37.719 21:11:52 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:37.980 lslocks: write error 00:06:37.980 21:11:52 -- event/cpu_locks.sh@50 -- # killprocess 1025609 00:06:37.980 21:11:52 -- common/autotest_common.sh@936 -- # '[' -z 1025609 ']' 00:06:37.980 21:11:52 -- common/autotest_common.sh@940 -- # kill -0 1025609 00:06:37.980 21:11:52 -- common/autotest_common.sh@941 -- # uname 00:06:37.980 21:11:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:37.980 21:11:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1025609 00:06:37.980 21:11:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:37.980 21:11:52 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:37.980 21:11:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1025609' 00:06:37.980 killing process with pid 1025609 00:06:37.980 21:11:52 -- common/autotest_common.sh@955 -- # kill 1025609 00:06:37.980 21:11:52 -- common/autotest_common.sh@960 -- # wait 1025609 00:06:38.922 21:11:53 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1025609 00:06:38.922 21:11:53 -- common/autotest_common.sh@638 -- # local es=0 00:06:38.922 21:11:53 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 1025609 00:06:38.922 21:11:53 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:06:38.922 21:11:53 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:38.922 21:11:53 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:06:38.922 21:11:53 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:38.922 21:11:53 -- common/autotest_common.sh@641 -- # waitforlisten 1025609 00:06:38.922 21:11:53 -- common/autotest_common.sh@817 -- # '[' -z 1025609 ']' 00:06:38.922 21:11:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:38.922 21:11:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:38.922 21:11:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:38.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
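
The "lslocks: write error" above is expected noise, not a failure: grep -q exits on its first match and closes the pipe, so lslocks takes a write error on the remainder of its output. The check itself is the two-command pipeline traced at cpu_locks.sh@22; as a standalone sketch:

    # cpu_locks.sh@22 pattern: does this pid hold a per-core lock?
    locks_exist() {
        local pid=$1
        # grep -q may close the pipe early, producing the harmless "lslocks: write error"
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }
    locks_exist 1025609
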
00:06:38.922 21:11:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:38.922 21:11:53 -- common/autotest_common.sh@10 -- # set +x 00:06:38.922 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common/autotest_common.sh: line 832: kill: (1025609) - No such process 00:06:38.922 ERROR: process (pid: 1025609) is no longer running 00:06:38.922 21:11:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:38.922 21:11:53 -- common/autotest_common.sh@850 -- # return 1 00:06:38.922 21:11:53 -- common/autotest_common.sh@641 -- # es=1 00:06:38.922 21:11:53 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:38.922 21:11:53 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:06:38.922 21:11:53 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:38.922 21:11:53 -- event/cpu_locks.sh@54 -- # no_locks 00:06:38.922 21:11:53 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:38.922 21:11:53 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:38.922 21:11:53 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:38.922 00:06:38.922 real 0m1.877s 00:06:38.922 user 0m1.840s 00:06:38.922 sys 0m0.466s 00:06:38.922 21:11:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:38.922 21:11:53 -- common/autotest_common.sh@10 -- # set +x 00:06:38.922 ************************************ 00:06:38.922 END TEST default_locks 00:06:38.922 ************************************ 00:06:38.922 21:11:53 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:38.922 21:11:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:38.922 21:11:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:38.922 21:11:53 -- common/autotest_common.sh@10 -- # set +x 00:06:38.922 ************************************ 00:06:38.922 START TEST default_locks_via_rpc 00:06:38.922 ************************************ 00:06:38.922 21:11:53 -- common/autotest_common.sh@1111 -- # default_locks_via_rpc 00:06:38.922 21:11:53 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1025968 00:06:38.922 21:11:53 -- event/cpu_locks.sh@63 -- # waitforlisten 1025968 00:06:38.922 21:11:53 -- common/autotest_common.sh@817 -- # '[' -z 1025968 ']' 00:06:38.922 21:11:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:38.922 21:11:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:38.922 21:11:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:38.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:38.922 21:11:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:38.922 21:11:53 -- common/autotest_common.sh@10 -- # set +x 00:06:38.922 21:11:53 -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:38.922 [2024-04-24 21:11:53.863544] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization... 
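
The block above is the harness's negative assertion: once pid 1025609 is gone, waitforlisten must fail, and the NOT wrapper turns that failure into a pass — es is set to 1, the es > 128 branch rules out a signal-killed exit, and (( !es == 0 )) succeeds. A reduced sketch of the same idea (the real NOT in autotest_common.sh additionally validates its argument via valid_exec_arg, as the trace shows):

    # succeed only when the wrapped command fails with an ordinary error
    NOT() {
        local es=0
        "$@" || es=$?
        (( es > 128 )) && return 1   # died on a signal: not a clean failure
        (( es != 0 ))                # pass iff the command failed
    }
    NOT waitforlisten 1025609   # passes: the target was already killed
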
00:06:38.922 [2024-04-24 21:11:53.863651] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1025968 ] 00:06:39.182 EAL: No free 2048 kB hugepages reported on node 1 00:06:39.182 [2024-04-24 21:11:53.977510] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.182 [2024-04-24 21:11:54.075537] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.753 21:11:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:39.753 21:11:54 -- common/autotest_common.sh@850 -- # return 0 00:06:39.753 21:11:54 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:39.753 21:11:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:39.753 21:11:54 -- common/autotest_common.sh@10 -- # set +x 00:06:39.753 21:11:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:39.753 21:11:54 -- event/cpu_locks.sh@67 -- # no_locks 00:06:39.753 21:11:54 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:39.753 21:11:54 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:39.753 21:11:54 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:39.753 21:11:54 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:39.753 21:11:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:39.753 21:11:54 -- common/autotest_common.sh@10 -- # set +x 00:06:39.753 21:11:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:39.753 21:11:54 -- event/cpu_locks.sh@71 -- # locks_exist 1025968 00:06:39.753 21:11:54 -- event/cpu_locks.sh@22 -- # lslocks -p 1025968 00:06:39.753 21:11:54 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:40.013 21:11:54 -- event/cpu_locks.sh@73 -- # killprocess 1025968 00:06:40.013 21:11:54 -- common/autotest_common.sh@936 -- # '[' -z 1025968 ']' 00:06:40.013 21:11:54 -- common/autotest_common.sh@940 -- # kill -0 1025968 00:06:40.013 21:11:54 -- common/autotest_common.sh@941 -- # uname 00:06:40.013 21:11:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:40.013 21:11:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1025968 00:06:40.013 21:11:54 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:40.013 21:11:54 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:40.013 21:11:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1025968' 00:06:40.013 killing process with pid 1025968 00:06:40.013 21:11:54 -- common/autotest_common.sh@955 -- # kill 1025968 00:06:40.013 21:11:54 -- common/autotest_common.sh@960 -- # wait 1025968 00:06:40.956 00:06:40.956 real 0m1.825s 00:06:40.956 user 0m1.733s 00:06:40.956 sys 0m0.503s 00:06:40.956 21:11:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:40.956 21:11:55 -- common/autotest_common.sh@10 -- # set +x 00:06:40.956 ************************************ 00:06:40.956 END TEST default_locks_via_rpc 00:06:40.956 ************************************ 00:06:40.956 21:11:55 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:40.956 21:11:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:40.956 21:11:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:40.956 21:11:55 -- common/autotest_common.sh@10 -- # set +x 00:06:40.956 ************************************ 00:06:40.956 START TEST non_locking_app_on_locked_coremask 
00:06:40.956 ************************************ 00:06:40.956 21:11:55 -- common/autotest_common.sh@1111 -- # non_locking_app_on_locked_coremask 00:06:40.956 21:11:55 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1026316 00:06:40.956 21:11:55 -- event/cpu_locks.sh@81 -- # waitforlisten 1026316 /var/tmp/spdk.sock 00:06:40.956 21:11:55 -- common/autotest_common.sh@817 -- # '[' -z 1026316 ']' 00:06:40.956 21:11:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:40.956 21:11:55 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:40.956 21:11:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:40.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:40.956 21:11:55 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:40.956 21:11:55 -- common/autotest_common.sh@10 -- # set +x 00:06:40.956 21:11:55 -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:40.956 [2024-04-24 21:11:55.810341] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization... 00:06:40.956 [2024-04-24 21:11:55.810448] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1026316 ] 00:06:40.956 EAL: No free 2048 kB hugepages reported on node 1 00:06:41.217 [2024-04-24 21:11:55.927781] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.217 [2024-04-24 21:11:56.039999] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.790 21:11:56 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:41.790 21:11:56 -- common/autotest_common.sh@850 -- # return 0 00:06:41.790 21:11:56 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1026608 00:06:41.790 21:11:56 -- event/cpu_locks.sh@85 -- # waitforlisten 1026608 /var/tmp/spdk2.sock 00:06:41.790 21:11:56 -- common/autotest_common.sh@817 -- # '[' -z 1026608 ']' 00:06:41.790 21:11:56 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:41.790 21:11:56 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:41.790 21:11:56 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:41.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:41.790 21:11:56 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:41.790 21:11:56 -- common/autotest_common.sh@10 -- # set +x 00:06:41.790 21:11:56 -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:41.790 [2024-04-24 21:11:56.578315] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization... 00:06:41.790 [2024-04-24 21:11:56.578427] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1026608 ] 00:06:41.790 EAL: No free 2048 kB hugepages reported on node 1 00:06:41.790 [2024-04-24 21:11:56.728692] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:41.790 [2024-04-24 21:11:56.728731] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.051 [2024-04-24 21:11:56.921621] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.017 21:11:57 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:43.017 21:11:57 -- common/autotest_common.sh@850 -- # return 0 00:06:43.017 21:11:57 -- event/cpu_locks.sh@87 -- # locks_exist 1026316 00:06:43.017 21:11:57 -- event/cpu_locks.sh@22 -- # lslocks -p 1026316 00:06:43.017 21:11:57 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:43.017 lslocks: write error 00:06:43.017 21:11:57 -- event/cpu_locks.sh@89 -- # killprocess 1026316 00:06:43.017 21:11:57 -- common/autotest_common.sh@936 -- # '[' -z 1026316 ']' 00:06:43.017 21:11:57 -- common/autotest_common.sh@940 -- # kill -0 1026316 00:06:43.017 21:11:57 -- common/autotest_common.sh@941 -- # uname 00:06:43.017 21:11:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:43.017 21:11:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1026316 00:06:43.308 21:11:57 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:43.308 21:11:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:43.308 21:11:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1026316' 00:06:43.308 killing process with pid 1026316 00:06:43.308 21:11:57 -- common/autotest_common.sh@955 -- # kill 1026316 00:06:43.308 21:11:57 -- common/autotest_common.sh@960 -- # wait 1026316 00:06:44.693 21:11:59 -- event/cpu_locks.sh@90 -- # killprocess 1026608 00:06:44.693 21:11:59 -- common/autotest_common.sh@936 -- # '[' -z 1026608 ']' 00:06:44.693 21:11:59 -- common/autotest_common.sh@940 -- # kill -0 1026608 00:06:44.693 21:11:59 -- common/autotest_common.sh@941 -- # uname 00:06:44.693 21:11:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:44.693 21:11:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1026608 00:06:44.954 21:11:59 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:44.954 21:11:59 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:44.954 21:11:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1026608' 00:06:44.954 killing process with pid 1026608 00:06:44.954 21:11:59 -- common/autotest_common.sh@955 -- # kill 1026608 00:06:44.954 21:11:59 -- common/autotest_common.sh@960 -- # wait 1026608 00:06:45.895 00:06:45.895 real 0m4.775s 00:06:45.895 user 0m4.795s 00:06:45.895 sys 0m1.014s 00:06:45.895 21:12:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:45.895 21:12:00 -- common/autotest_common.sh@10 -- # set +x 00:06:45.895 ************************************ 00:06:45.895 END TEST non_locking_app_on_locked_coremask 00:06:45.895 ************************************ 00:06:45.895 21:12:00 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:45.895 21:12:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:45.895 21:12:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:45.895 21:12:00 -- common/autotest_common.sh@10 -- # set +x 00:06:45.895 ************************************ 00:06:45.895 START TEST locking_app_on_unlocked_coremask 00:06:45.895 ************************************ 00:06:45.895 21:12:00 -- common/autotest_common.sh@1111 -- # locking_app_on_unlocked_coremask 00:06:45.895 21:12:00 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1027344 00:06:45.895 21:12:00 -- 
event/cpu_locks.sh@99 -- # waitforlisten 1027344 /var/tmp/spdk.sock 00:06:45.895 21:12:00 -- common/autotest_common.sh@817 -- # '[' -z 1027344 ']' 00:06:45.895 21:12:00 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.895 21:12:00 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:45.895 21:12:00 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:45.895 21:12:00 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:45.895 21:12:00 -- common/autotest_common.sh@10 -- # set +x 00:06:45.895 21:12:00 -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:45.895 [2024-04-24 21:12:00.713061] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization... 00:06:45.895 [2024-04-24 21:12:00.713167] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1027344 ] 00:06:45.895 EAL: No free 2048 kB hugepages reported on node 1 00:06:45.895 [2024-04-24 21:12:00.831194] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:45.895 [2024-04-24 21:12:00.831230] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.155 [2024-04-24 21:12:00.929738] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.727 21:12:01 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:46.727 21:12:01 -- common/autotest_common.sh@850 -- # return 0 00:06:46.727 21:12:01 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1027649 00:06:46.727 21:12:01 -- event/cpu_locks.sh@103 -- # waitforlisten 1027649 /var/tmp/spdk2.sock 00:06:46.727 21:12:01 -- common/autotest_common.sh@817 -- # '[' -z 1027649 ']' 00:06:46.727 21:12:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:46.727 21:12:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:46.727 21:12:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:46.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:46.727 21:12:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:46.727 21:12:01 -- common/autotest_common.sh@10 -- # set +x 00:06:46.727 21:12:01 -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:46.727 [2024-04-24 21:12:01.481392] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization... 
00:06:46.727 [2024-04-24 21:12:01.481505] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1027649 ] 00:06:46.727 EAL: No free 2048 kB hugepages reported on node 1 00:06:46.727 [2024-04-24 21:12:01.636579] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.988 [2024-04-24 21:12:01.835513] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.929 21:12:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:47.929 21:12:02 -- common/autotest_common.sh@850 -- # return 0 00:06:47.929 21:12:02 -- event/cpu_locks.sh@105 -- # locks_exist 1027649 00:06:47.929 21:12:02 -- event/cpu_locks.sh@22 -- # lslocks -p 1027649 00:06:47.929 21:12:02 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:47.929 lslocks: write error 00:06:47.929 21:12:02 -- event/cpu_locks.sh@107 -- # killprocess 1027344 00:06:47.929 21:12:02 -- common/autotest_common.sh@936 -- # '[' -z 1027344 ']' 00:06:47.929 21:12:02 -- common/autotest_common.sh@940 -- # kill -0 1027344 00:06:47.929 21:12:02 -- common/autotest_common.sh@941 -- # uname 00:06:47.929 21:12:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:47.929 21:12:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1027344 00:06:48.190 21:12:02 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:48.190 21:12:02 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:48.190 21:12:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1027344' 00:06:48.190 killing process with pid 1027344 00:06:48.190 21:12:02 -- common/autotest_common.sh@955 -- # kill 1027344 00:06:48.190 21:12:02 -- common/autotest_common.sh@960 -- # wait 1027344 00:06:50.103 21:12:04 -- event/cpu_locks.sh@108 -- # killprocess 1027649 00:06:50.103 21:12:04 -- common/autotest_common.sh@936 -- # '[' -z 1027649 ']' 00:06:50.103 21:12:04 -- common/autotest_common.sh@940 -- # kill -0 1027649 00:06:50.103 21:12:04 -- common/autotest_common.sh@941 -- # uname 00:06:50.103 21:12:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:50.103 21:12:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1027649 00:06:50.103 21:12:04 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:50.103 21:12:04 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:50.103 21:12:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1027649' 00:06:50.103 killing process with pid 1027649 00:06:50.103 21:12:04 -- common/autotest_common.sh@955 -- # kill 1027649 00:06:50.103 21:12:04 -- common/autotest_common.sh@960 -- # wait 1027649 00:06:50.675 00:06:50.675 real 0m4.942s 00:06:50.675 user 0m4.935s 00:06:50.675 sys 0m1.017s 00:06:50.675 21:12:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:50.675 21:12:05 -- common/autotest_common.sh@10 -- # set +x 00:06:50.675 ************************************ 00:06:50.675 END TEST locking_app_on_unlocked_coremask 00:06:50.675 ************************************ 00:06:50.675 21:12:05 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:50.675 21:12:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:50.675 21:12:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:50.675 21:12:05 -- common/autotest_common.sh@10 -- # set +x 00:06:50.936 
************************************ 00:06:50.936 START TEST locking_app_on_locked_coremask 00:06:50.936 ************************************ 00:06:50.936 21:12:05 -- common/autotest_common.sh@1111 -- # locking_app_on_locked_coremask 00:06:50.936 21:12:05 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1028532 00:06:50.936 21:12:05 -- event/cpu_locks.sh@116 -- # waitforlisten 1028532 /var/tmp/spdk.sock 00:06:50.936 21:12:05 -- common/autotest_common.sh@817 -- # '[' -z 1028532 ']' 00:06:50.936 21:12:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:50.936 21:12:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:50.936 21:12:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:50.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:50.936 21:12:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:50.936 21:12:05 -- common/autotest_common.sh@10 -- # set +x 00:06:50.936 21:12:05 -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:50.936 [2024-04-24 21:12:05.785489] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization... 00:06:50.936 [2024-04-24 21:12:05.785591] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1028532 ] 00:06:50.936 EAL: No free 2048 kB hugepages reported on node 1 00:06:50.936 [2024-04-24 21:12:05.900381] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.197 [2024-04-24 21:12:05.997492] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.769 21:12:06 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:51.769 21:12:06 -- common/autotest_common.sh@850 -- # return 0 00:06:51.769 21:12:06 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1028801 00:06:51.769 21:12:06 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1028801 /var/tmp/spdk2.sock 00:06:51.769 21:12:06 -- common/autotest_common.sh@638 -- # local es=0 00:06:51.769 21:12:06 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 1028801 /var/tmp/spdk2.sock 00:06:51.769 21:12:06 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:06:51.769 21:12:06 -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:51.769 21:12:06 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:51.769 21:12:06 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:06:51.769 21:12:06 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:51.769 21:12:06 -- common/autotest_common.sh@641 -- # waitforlisten 1028801 /var/tmp/spdk2.sock 00:06:51.769 21:12:06 -- common/autotest_common.sh@817 -- # '[' -z 1028801 ']' 00:06:51.769 21:12:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:51.769 21:12:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:51.769 21:12:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:51.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:51.769 21:12:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:51.769 21:12:06 -- common/autotest_common.sh@10 -- # set +x 00:06:51.769 [2024-04-24 21:12:06.554435] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization... 00:06:51.769 [2024-04-24 21:12:06.554549] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1028801 ] 00:06:51.769 EAL: No free 2048 kB hugepages reported on node 1 00:06:51.769 [2024-04-24 21:12:06.708582] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1028532 has claimed it. 00:06:51.769 [2024-04-24 21:12:06.708630] app.c: 821:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:52.339 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common/autotest_common.sh: line 832: kill: (1028801) - No such process 00:06:52.339 ERROR: process (pid: 1028801) is no longer running 00:06:52.339 21:12:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:52.339 21:12:07 -- common/autotest_common.sh@850 -- # return 1 00:06:52.339 21:12:07 -- common/autotest_common.sh@641 -- # es=1 00:06:52.339 21:12:07 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:52.339 21:12:07 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:06:52.339 21:12:07 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:52.339 21:12:07 -- event/cpu_locks.sh@122 -- # locks_exist 1028532 00:06:52.339 21:12:07 -- event/cpu_locks.sh@22 -- # lslocks -p 1028532 00:06:52.339 21:12:07 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:52.339 lslocks: write error 00:06:52.339 21:12:07 -- event/cpu_locks.sh@124 -- # killprocess 1028532 00:06:52.339 21:12:07 -- common/autotest_common.sh@936 -- # '[' -z 1028532 ']' 00:06:52.339 21:12:07 -- common/autotest_common.sh@940 -- # kill -0 1028532 00:06:52.339 21:12:07 -- common/autotest_common.sh@941 -- # uname 00:06:52.339 21:12:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:52.339 21:12:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1028532 00:06:52.339 21:12:07 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:52.339 21:12:07 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:52.339 21:12:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1028532' 00:06:52.339 killing process with pid 1028532 00:06:52.339 21:12:07 -- common/autotest_common.sh@955 -- # kill 1028532 00:06:52.339 21:12:07 -- common/autotest_common.sh@960 -- # wait 1028532 00:06:53.281 00:06:53.281 real 0m2.413s 00:06:53.281 user 0m2.452s 00:06:53.281 sys 0m0.604s 00:06:53.281 21:12:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:53.281 21:12:08 -- common/autotest_common.sh@10 -- # set +x 00:06:53.281 ************************************ 00:06:53.281 END TEST locking_app_on_locked_coremask 00:06:53.281 ************************************ 00:06:53.281 21:12:08 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:53.281 21:12:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:53.281 21:12:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:53.281 21:12:08 -- common/autotest_common.sh@10 -- # set +x 00:06:53.281 ************************************ 00:06:53.281 START TEST locking_overlapped_coremask 00:06:53.281 
************************************ 00:06:53.281 21:12:08 -- common/autotest_common.sh@1111 -- # locking_overlapped_coremask 00:06:53.281 21:12:08 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1029375 00:06:53.281 21:12:08 -- event/cpu_locks.sh@133 -- # waitforlisten 1029375 /var/tmp/spdk.sock 00:06:53.281 21:12:08 -- common/autotest_common.sh@817 -- # '[' -z 1029375 ']' 00:06:53.281 21:12:08 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:53.281 21:12:08 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:53.281 21:12:08 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:53.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:53.281 21:12:08 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:53.281 21:12:08 -- common/autotest_common.sh@10 -- # set +x 00:06:53.281 21:12:08 -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:53.542 [2024-04-24 21:12:08.308931] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization... 00:06:53.542 [2024-04-24 21:12:08.309000] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1029375 ] 00:06:53.542 EAL: No free 2048 kB hugepages reported on node 1 00:06:53.542 [2024-04-24 21:12:08.397770] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:53.542 [2024-04-24 21:12:08.498486] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:53.542 [2024-04-24 21:12:08.498523] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.542 [2024-04-24 21:12:08.498529] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:54.113 21:12:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:54.113 21:12:09 -- common/autotest_common.sh@850 -- # return 0 00:06:54.113 21:12:09 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1029669 00:06:54.113 21:12:09 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1029669 /var/tmp/spdk2.sock 00:06:54.113 21:12:09 -- common/autotest_common.sh@638 -- # local es=0 00:06:54.113 21:12:09 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 1029669 /var/tmp/spdk2.sock 00:06:54.113 21:12:09 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:06:54.113 21:12:09 -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:54.113 21:12:09 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:54.113 21:12:09 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:06:54.113 21:12:09 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:54.113 21:12:09 -- common/autotest_common.sh@641 -- # waitforlisten 1029669 /var/tmp/spdk2.sock 00:06:54.113 21:12:09 -- common/autotest_common.sh@817 -- # '[' -z 1029669 ']' 00:06:54.113 21:12:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:54.113 21:12:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:54.113 21:12:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:54.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
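
The two masks used in this test overlap on exactly one core: 0x7 is binary 111 (cores 0-2, the three reactors started above) and 0x1c is binary 11100 (cores 2-4), so core 2 is the contested core — which is why the claim error that follows names core 2. The arithmetic is easy to verify:

    # intersection of the two cpumasks: only bit 2 (core 2) is shared
    printf '0x%x\n' $(( 0x7 & 0x1c ))   # prints 0x4
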
00:06:54.113 21:12:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:54.113 21:12:09 -- common/autotest_common.sh@10 -- # set +x 00:06:54.374 [2024-04-24 21:12:09.130563] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization... 00:06:54.374 [2024-04-24 21:12:09.130707] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1029669 ] 00:06:54.374 EAL: No free 2048 kB hugepages reported on node 1 00:06:54.374 [2024-04-24 21:12:09.301771] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1029375 has claimed it. 00:06:54.374 [2024-04-24 21:12:09.301819] app.c: 821:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:54.946 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common/autotest_common.sh: line 832: kill: (1029669) - No such process 00:06:54.946 ERROR: process (pid: 1029669) is no longer running 00:06:54.946 21:12:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:54.946 21:12:09 -- common/autotest_common.sh@850 -- # return 1 00:06:54.946 21:12:09 -- common/autotest_common.sh@641 -- # es=1 00:06:54.946 21:12:09 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:54.946 21:12:09 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:06:54.946 21:12:09 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:54.946 21:12:09 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:54.946 21:12:09 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:54.946 21:12:09 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:54.946 21:12:09 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:54.946 21:12:09 -- event/cpu_locks.sh@141 -- # killprocess 1029375 00:06:54.946 21:12:09 -- common/autotest_common.sh@936 -- # '[' -z 1029375 ']' 00:06:54.946 21:12:09 -- common/autotest_common.sh@940 -- # kill -0 1029375 00:06:54.946 21:12:09 -- common/autotest_common.sh@941 -- # uname 00:06:54.946 21:12:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:54.946 21:12:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1029375 00:06:54.946 21:12:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:54.946 21:12:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:54.946 21:12:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1029375' 00:06:54.946 killing process with pid 1029375 00:06:54.946 21:12:09 -- common/autotest_common.sh@955 -- # kill 1029375 00:06:54.946 21:12:09 -- common/autotest_common.sh@960 -- # wait 1029375 00:06:55.889 00:06:55.889 real 0m2.355s 00:06:55.889 user 0m6.238s 00:06:55.889 sys 0m0.563s 00:06:55.889 21:12:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:55.889 21:12:10 -- common/autotest_common.sh@10 -- # set +x 00:06:55.889 ************************************ 00:06:55.889 END TEST locking_overlapped_coremask 00:06:55.889 ************************************ 00:06:55.889 21:12:10 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:55.889 21:12:10 -- 
common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:55.889 21:12:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:55.889 21:12:10 -- common/autotest_common.sh@10 -- # set +x 00:06:55.889 ************************************ 00:06:55.889 START TEST locking_overlapped_coremask_via_rpc 00:06:55.889 ************************************ 00:06:55.889 21:12:10 -- common/autotest_common.sh@1111 -- # locking_overlapped_coremask_via_rpc 00:06:55.889 21:12:10 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1030011 00:06:55.889 21:12:10 -- event/cpu_locks.sh@149 -- # waitforlisten 1030011 /var/tmp/spdk.sock 00:06:55.889 21:12:10 -- common/autotest_common.sh@817 -- # '[' -z 1030011 ']' 00:06:55.889 21:12:10 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:55.889 21:12:10 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:55.889 21:12:10 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:55.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:55.889 21:12:10 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:55.889 21:12:10 -- common/autotest_common.sh@10 -- # set +x 00:06:55.889 21:12:10 -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:55.889 [2024-04-24 21:12:10.843380] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization... 00:06:55.889 [2024-04-24 21:12:10.843508] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1030011 ] 00:06:56.151 EAL: No free 2048 kB hugepages reported on node 1 00:06:56.151 [2024-04-24 21:12:10.974058] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:56.151 [2024-04-24 21:12:10.974100] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:56.151 [2024-04-24 21:12:11.072782] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:56.151 [2024-04-24 21:12:11.072878] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.151 [2024-04-24 21:12:11.072885] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:56.724 21:12:11 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:56.724 21:12:11 -- common/autotest_common.sh@850 -- # return 0 00:06:56.724 21:12:11 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1030033 00:06:56.724 21:12:11 -- event/cpu_locks.sh@153 -- # waitforlisten 1030033 /var/tmp/spdk2.sock 00:06:56.724 21:12:11 -- common/autotest_common.sh@817 -- # '[' -z 1030033 ']' 00:06:56.724 21:12:11 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:56.724 21:12:11 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:56.724 21:12:11 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:56.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:56.724 21:12:11 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:56.724 21:12:11 -- common/autotest_common.sh@10 -- # set +x 00:06:56.724 21:12:11 -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:56.724 [2024-04-24 21:12:11.654830] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization... 00:06:56.724 [2024-04-24 21:12:11.654973] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1030033 ] 00:06:56.986 EAL: No free 2048 kB hugepages reported on node 1 00:06:56.986 [2024-04-24 21:12:11.829210] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:56.986 [2024-04-24 21:12:11.829255] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:57.247 [2024-04-24 21:12:12.029788] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:57.247 [2024-04-24 21:12:12.029913] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:57.247 [2024-04-24 21:12:12.029945] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:06:57.820 21:12:12 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:57.820 21:12:12 -- common/autotest_common.sh@850 -- # return 0 00:06:57.820 21:12:12 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:57.820 21:12:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:57.820 21:12:12 -- common/autotest_common.sh@10 -- # set +x 00:06:57.820 21:12:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:57.820 21:12:12 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:57.820 21:12:12 -- common/autotest_common.sh@638 -- # local es=0 00:06:57.820 21:12:12 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:57.820 21:12:12 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:06:57.820 21:12:12 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:57.820 21:12:12 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:06:57.820 21:12:12 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:57.820 21:12:12 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:57.820 21:12:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:57.820 21:12:12 -- common/autotest_common.sh@10 -- # set +x 00:06:57.820 [2024-04-24 21:12:12.782402] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1030011 has claimed it. 
00:06:58.081 request: 00:06:58.081 { 00:06:58.081 "method": "framework_enable_cpumask_locks", 00:06:58.081 "req_id": 1 00:06:58.081 } 00:06:58.081 Got JSON-RPC error response 00:06:58.081 response: 00:06:58.081 { 00:06:58.081 "code": -32603, 00:06:58.081 "message": "Failed to claim CPU core: 2" 00:06:58.081 } 00:06:58.081 21:12:12 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:06:58.081 21:12:12 -- common/autotest_common.sh@641 -- # es=1 00:06:58.081 21:12:12 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:58.081 21:12:12 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:06:58.081 21:12:12 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:58.081 21:12:12 -- event/cpu_locks.sh@158 -- # waitforlisten 1030011 /var/tmp/spdk.sock 00:06:58.081 21:12:12 -- common/autotest_common.sh@817 -- # '[' -z 1030011 ']' 00:06:58.081 21:12:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:58.081 21:12:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:58.081 21:12:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:58.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:58.081 21:12:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:58.081 21:12:12 -- common/autotest_common.sh@10 -- # set +x 00:06:58.081 21:12:12 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:58.081 21:12:12 -- common/autotest_common.sh@850 -- # return 0 00:06:58.081 21:12:12 -- event/cpu_locks.sh@159 -- # waitforlisten 1030033 /var/tmp/spdk2.sock 00:06:58.081 21:12:12 -- common/autotest_common.sh@817 -- # '[' -z 1030033 ']' 00:06:58.081 21:12:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:58.081 21:12:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:58.081 21:12:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:58.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
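
This is the runtime variant of the core-lock conflict: both targets were started with --disable-cpumask-locks (which is why the 0x1c instance booted cleanly on cores 2-4 above), so the collision only surfaces when each side asks for the locks via RPC — the first claim succeeds, the second fails with the JSON-RPC error reproduced above (code -32603, "Failed to claim CPU core: 2"). Issued by hand, assuming rpc_cmd resolves to the tree's scripts/rpc.py as elsewhere in this log, the pair of calls looks like:

    # first instance (cores 0-2) takes the locks and wins core 2
    ./scripts/rpc.py framework_enable_cpumask_locks
    # second instance (cores 2-4) now gets the -32603 error above
    ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
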
00:06:58.081 21:12:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:58.081 21:12:12 -- common/autotest_common.sh@10 -- # set +x 00:06:58.340 21:12:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:58.340 21:12:13 -- common/autotest_common.sh@850 -- # return 0 00:06:58.340 21:12:13 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:58.340 21:12:13 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:58.340 21:12:13 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:58.340 21:12:13 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:58.340 00:06:58.340 real 0m2.360s 00:06:58.340 user 0m0.735s 00:06:58.340 sys 0m0.152s 00:06:58.340 21:12:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:58.340 21:12:13 -- common/autotest_common.sh@10 -- # set +x 00:06:58.340 ************************************ 00:06:58.340 END TEST locking_overlapped_coremask_via_rpc 00:06:58.340 ************************************ 00:06:58.340 21:12:13 -- event/cpu_locks.sh@174 -- # cleanup 00:06:58.340 21:12:13 -- event/cpu_locks.sh@15 -- # [[ -z 1030011 ]] 00:06:58.340 21:12:13 -- event/cpu_locks.sh@15 -- # killprocess 1030011 00:06:58.340 21:12:13 -- common/autotest_common.sh@936 -- # '[' -z 1030011 ']' 00:06:58.340 21:12:13 -- common/autotest_common.sh@940 -- # kill -0 1030011 00:06:58.340 21:12:13 -- common/autotest_common.sh@941 -- # uname 00:06:58.340 21:12:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:58.340 21:12:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1030011 00:06:58.340 21:12:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:58.340 21:12:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:58.340 21:12:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1030011' 00:06:58.340 killing process with pid 1030011 00:06:58.340 21:12:13 -- common/autotest_common.sh@955 -- # kill 1030011 00:06:58.340 21:12:13 -- common/autotest_common.sh@960 -- # wait 1030011 00:06:59.285 21:12:14 -- event/cpu_locks.sh@16 -- # [[ -z 1030033 ]] 00:06:59.285 21:12:14 -- event/cpu_locks.sh@16 -- # killprocess 1030033 00:06:59.285 21:12:14 -- common/autotest_common.sh@936 -- # '[' -z 1030033 ']' 00:06:59.285 21:12:14 -- common/autotest_common.sh@940 -- # kill -0 1030033 00:06:59.285 21:12:14 -- common/autotest_common.sh@941 -- # uname 00:06:59.285 21:12:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:59.285 21:12:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1030033 00:06:59.285 21:12:14 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:06:59.285 21:12:14 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:06:59.285 21:12:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1030033' 00:06:59.285 killing process with pid 1030033 00:06:59.285 21:12:14 -- common/autotest_common.sh@955 -- # kill 1030033 00:06:59.285 21:12:14 -- common/autotest_common.sh@960 -- # wait 1030033 00:07:00.228 21:12:14 -- event/cpu_locks.sh@18 -- # rm -f 00:07:00.228 21:12:14 -- event/cpu_locks.sh@1 -- # cleanup 00:07:00.228 21:12:14 -- event/cpu_locks.sh@15 -- # [[ -z 1030011 ]] 00:07:00.228 21:12:14 -- event/cpu_locks.sh@15 -- # killprocess 1030011 
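
check_remaining_locks, traced just above, is a plain filesystem assertion: after the first instance claims its three cores, exactly /var/tmp/spdk_cpu_lock_000 through _002 must exist, no more and no fewer. Its core is three lines of bash:

    # cpu_locks.sh@36-38 pattern: actual lock files must equal the expected set
    locks=(/var/tmp/spdk_cpu_lock_*)
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
    [[ ${locks[*]} == "${locks_expected[*]}" ]]   # non-zero exit on any mismatch
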
00:07:00.228 21:12:14 -- common/autotest_common.sh@936 -- # '[' -z 1030011 ']' 00:07:00.228 21:12:14 -- common/autotest_common.sh@940 -- # kill -0 1030011 00:07:00.228 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (1030011) - No such process 00:07:00.228 21:12:14 -- common/autotest_common.sh@963 -- # echo 'Process with pid 1030011 is not found' 00:07:00.228 Process with pid 1030011 is not found 00:07:00.228 21:12:14 -- event/cpu_locks.sh@16 -- # [[ -z 1030033 ]] 00:07:00.228 21:12:14 -- event/cpu_locks.sh@16 -- # killprocess 1030033 00:07:00.228 21:12:14 -- common/autotest_common.sh@936 -- # '[' -z 1030033 ']' 00:07:00.228 21:12:14 -- common/autotest_common.sh@940 -- # kill -0 1030033 00:07:00.228 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (1030033) - No such process 00:07:00.228 21:12:14 -- common/autotest_common.sh@963 -- # echo 'Process with pid 1030033 is not found' 00:07:00.228 Process with pid 1030033 is not found 00:07:00.228 21:12:14 -- event/cpu_locks.sh@18 -- # rm -f 00:07:00.228 00:07:00.228 real 0m23.289s 00:07:00.228 user 0m37.744s 00:07:00.228 sys 0m5.652s 00:07:00.228 21:12:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:00.228 21:12:14 -- common/autotest_common.sh@10 -- # set +x 00:07:00.228 ************************************ 00:07:00.228 END TEST cpu_locks 00:07:00.228 ************************************ 00:07:00.228 00:07:00.228 real 0m47.717s 00:07:00.228 user 1m23.778s 00:07:00.228 sys 0m9.243s 00:07:00.228 21:12:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:00.228 21:12:14 -- common/autotest_common.sh@10 -- # set +x 00:07:00.228 ************************************ 00:07:00.228 END TEST event 00:07:00.228 ************************************ 00:07:00.228 21:12:14 -- spdk/autotest.sh@178 -- # run_test thread /var/jenkins/workspace/dsa-phy-autotest/spdk/test/thread/thread.sh 00:07:00.228 21:12:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:00.228 21:12:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:00.228 21:12:14 -- common/autotest_common.sh@10 -- # set +x 00:07:00.228 ************************************ 00:07:00.228 START TEST thread 00:07:00.228 ************************************ 00:07:00.228 21:12:15 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/thread/thread.sh 00:07:00.228 * Looking for test storage... 00:07:00.228 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/thread 00:07:00.228 21:12:15 -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/dsa-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:00.228 21:12:15 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:07:00.228 21:12:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:00.228 21:12:15 -- common/autotest_common.sh@10 -- # set +x 00:07:00.489 ************************************ 00:07:00.489 START TEST thread_poller_perf 00:07:00.489 ************************************ 00:07:00.489 21:12:15 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:00.489 [2024-04-24 21:12:15.241504] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization... 
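
The "No such process" lines during cleanup are intentional tolerance, not errors: cleanup re-runs killprocess for both pids even though the tests already reaped them, and killprocess converts a dead pid into an informational message (the @940 kill -0 probe and @963 echo above). Roughly:

    # killprocess tolerates an already-gone pid (autotest_common.sh@940/@963 pattern)
    killprocess() {
        local pid=$1
        if ! kill -0 "$pid"; then   # this probe's stderr is the "No such process" line
            echo "Process with pid $pid is not found"
            return 0
        fi
        kill "$pid" && wait "$pid"
    }
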
00:07:00.489 [2024-04-24 21:12:15.241610] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1030982 ] 00:07:00.489 EAL: No free 2048 kB hugepages reported on node 1 00:07:00.489 [2024-04-24 21:12:15.357903] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.749 [2024-04-24 21:12:15.454310] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.749 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:07:01.692 ====================================== 00:07:01.692 busy:1905269526 (cyc) 00:07:01.692 total_run_count: 404000 00:07:01.692 tsc_hz: 1900000000 (cyc) 00:07:01.692 ====================================== 00:07:01.692 poller_cost: 4716 (cyc), 2482 (nsec) 00:07:01.692 00:07:01.692 real 0m1.410s 00:07:01.692 user 0m1.277s 00:07:01.692 sys 0m0.127s 00:07:01.692 21:12:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:01.692 21:12:16 -- common/autotest_common.sh@10 -- # set +x 00:07:01.692 ************************************ 00:07:01.692 END TEST thread_poller_perf 00:07:01.692 ************************************ 00:07:01.692 21:12:16 -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/dsa-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:01.692 21:12:16 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:07:01.692 21:12:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:01.692 21:12:16 -- common/autotest_common.sh@10 -- # set +x 00:07:01.953 ************************************ 00:07:01.953 START TEST thread_poller_perf 00:07:01.953 ************************************ 00:07:01.953 21:12:16 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:01.953 [2024-04-24 21:12:16.769684] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization... 00:07:01.953 [2024-04-24 21:12:16.769789] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1031283 ] 00:07:01.953 EAL: No free 2048 kB hugepages reported on node 1 00:07:01.953 [2024-04-24 21:12:16.889604] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.214 [2024-04-24 21:12:16.981354] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.214 Running 1000 pollers for 1 seconds with 0 microseconds period. 
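
The poller_cost line in the summary above is derived from the other three figures: cycles per iteration is busy cycles divided by total_run_count, and the nanosecond value divides that by the 1.9 GHz TSC. For the 1-microsecond-period run just finished:

    # reproduce run 1's poller_cost from its own summary
    echo $(( 1905269526 / 404000 ))               # 4716 cycles per iteration
    echo $(( 4716 * 1000000000 / 1900000000 ))    # 2482 ns at tsc_hz=1900000000
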
00:07:03.599 ====================================== 00:07:03.599 busy:1901815360 (cyc) 00:07:03.599 total_run_count: 5328000 00:07:03.599 tsc_hz: 1900000000 (cyc) 00:07:03.599 ====================================== 00:07:03.599 poller_cost: 356 (cyc), 187 (nsec) 00:07:03.599 00:07:03.599 real 0m1.414s 00:07:03.599 user 0m1.289s 00:07:03.599 sys 0m0.121s 00:07:03.599 21:12:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:03.599 21:12:18 -- common/autotest_common.sh@10 -- # set +x 00:07:03.599 ************************************ 00:07:03.599 END TEST thread_poller_perf 00:07:03.599 ************************************ 00:07:03.599 21:12:18 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:03.599 00:07:03.599 real 0m3.134s 00:07:03.599 user 0m2.671s 00:07:03.599 sys 0m0.444s 00:07:03.599 21:12:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:03.599 21:12:18 -- common/autotest_common.sh@10 -- # set +x 00:07:03.599 ************************************ 00:07:03.599 END TEST thread 00:07:03.599 ************************************ 00:07:03.599 21:12:18 -- spdk/autotest.sh@179 -- # run_test accel /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/accel.sh 00:07:03.599 21:12:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:03.599 21:12:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:03.599 21:12:18 -- common/autotest_common.sh@10 -- # set +x 00:07:03.599 ************************************ 00:07:03.599 START TEST accel 00:07:03.599 ************************************ 00:07:03.599 21:12:18 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/accel.sh 00:07:03.599 * Looking for test storage... 00:07:03.599 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel 00:07:03.599 21:12:18 -- accel/accel.sh@81 -- # declare -A expected_opcs 00:07:03.599 21:12:18 -- accel/accel.sh@82 -- # get_expected_opcs 00:07:03.599 21:12:18 -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:03.599 21:12:18 -- accel/accel.sh@62 -- # spdk_tgt_pid=1031711 00:07:03.599 21:12:18 -- accel/accel.sh@63 -- # waitforlisten 1031711 00:07:03.599 21:12:18 -- common/autotest_common.sh@817 -- # '[' -z 1031711 ']' 00:07:03.599 21:12:18 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:03.599 21:12:18 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:03.599 21:12:18 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:03.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
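
Comparing the two poller runs is the point of the pair: with the 1 µs timer period removed, the cost per iteration falls from 4716 cycles (2482 ns) to 1901815360 / 5328000 ≈ 356 cycles (187 ns), which suggests the bulk of run 1's per-poll cost is timer bookkeeping rather than the poller body itself:

    echo $(( 1901815360 / 5328000 ))            # 356 cycles per iteration in run 2
    echo $(( 356 * 1000000000 / 1900000000 ))   # 187 ns at the same 1.9 GHz TSC
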
00:07:03.599 21:12:18 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:03.599 21:12:18 -- common/autotest_common.sh@10 -- # set +x 00:07:03.599 21:12:18 -- accel/accel.sh@61 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:07:03.599 21:12:18 -- accel/accel.sh@61 -- # build_accel_config 00:07:03.599 21:12:18 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:03.599 21:12:18 -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:07:03.599 21:12:18 -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:07:03.599 21:12:18 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:07:03.599 21:12:18 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:07:03.599 21:12:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:03.599 21:12:18 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:03.599 21:12:18 -- accel/accel.sh@40 -- # local IFS=, 00:07:03.599 21:12:18 -- accel/accel.sh@41 -- # jq -r . 00:07:03.599 [2024-04-24 21:12:18.440297] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization... 00:07:03.599 [2024-04-24 21:12:18.440406] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1031711 ] 00:07:03.599 EAL: No free 2048 kB hugepages reported on node 1 00:07:03.599 [2024-04-24 21:12:18.555820] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.861 [2024-04-24 21:12:18.650960] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.861 [2024-04-24 21:12:18.655477] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:07:03.861 [2024-04-24 21:12:18.663437] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:07:13.859 21:12:27 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:13.859 21:12:27 -- common/autotest_common.sh@850 -- # return 0 00:07:13.859 21:12:27 -- accel/accel.sh@65 -- # [[ 1 -gt 0 ]] 00:07:13.859 21:12:27 -- accel/accel.sh@65 -- # check_save_config dsa_scan_accel_module 00:07:13.859 21:12:27 -- accel/accel.sh@56 -- # rpc_cmd save_config 00:07:13.859 21:12:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:13.859 21:12:27 -- common/autotest_common.sh@10 -- # set +x 00:07:13.859 21:12:27 -- accel/accel.sh@56 -- # jq -r '.subsystems[] | select(.subsystem=="accel").config[]' 00:07:13.859 21:12:27 -- accel/accel.sh@56 -- # grep dsa_scan_accel_module 00:07:13.859 21:12:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:13.859 "method": "dsa_scan_accel_module", 00:07:13.859 21:12:27 -- accel/accel.sh@66 -- # [[ 1 -gt 0 ]] 00:07:13.859 21:12:27 -- accel/accel.sh@66 -- # check_save_config iaa_scan_accel_module 00:07:13.859 21:12:27 -- accel/accel.sh@56 -- # rpc_cmd save_config 00:07:13.859 21:12:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:13.859 21:12:27 -- common/autotest_common.sh@10 -- # set +x 00:07:13.859 21:12:27 -- accel/accel.sh@56 -- # jq -r '.subsystems[] | select(.subsystem=="accel").config[]' 00:07:13.859 21:12:27 -- accel/accel.sh@56 -- # grep iaa_scan_accel_module 00:07:13.859 21:12:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:13.859 "method": "iaa_scan_accel_module" 00:07:13.859 21:12:27 -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:07:13.859 21:12:27 -- accel/accel.sh@68 -- # [[ -n '' ]] 00:07:13.859 21:12:27 -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py 
accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:07:13.859 21:12:27 -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:07:13.859 21:12:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:13.859 21:12:27 -- common/autotest_common.sh@10 -- # set +x 00:07:13.859 21:12:27 -- accel/accel.sh@70 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:07:13.859 21:12:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:13.859 21:12:27 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:13.859 21:12:27 -- accel/accel.sh@72 -- # IFS== 00:07:13.859 21:12:27 -- accel/accel.sh@72 -- # read -r opc module 00:07:13.859 21:12:27 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=dsa 00:07:13.859 21:12:27 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:13.859 21:12:27 -- accel/accel.sh@72 -- # IFS== 00:07:13.859 21:12:27 -- accel/accel.sh@72 -- # read -r opc module 00:07:13.859 21:12:27 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=dsa 00:07:13.859 21:12:27 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:13.859 21:12:27 -- accel/accel.sh@72 -- # IFS== 00:07:13.859 21:12:27 -- accel/accel.sh@72 -- # read -r opc module 00:07:13.859 21:12:27 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=dsa 00:07:13.859 21:12:27 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:13.859 21:12:27 -- accel/accel.sh@72 -- # IFS== 00:07:13.859 21:12:27 -- accel/accel.sh@72 -- # read -r opc module 00:07:13.859 21:12:27 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=dsa 00:07:13.859 21:12:27 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:13.859 21:12:27 -- accel/accel.sh@72 -- # IFS== 00:07:13.859 21:12:27 -- accel/accel.sh@72 -- # read -r opc module 00:07:13.859 21:12:27 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=dsa 00:07:13.859 21:12:27 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:13.859 21:12:27 -- accel/accel.sh@72 -- # IFS== 00:07:13.859 21:12:27 -- accel/accel.sh@72 -- # read -r opc module 00:07:13.859 21:12:27 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=dsa 00:07:13.859 21:12:27 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:13.859 21:12:27 -- accel/accel.sh@72 -- # IFS== 00:07:13.859 21:12:27 -- accel/accel.sh@72 -- # read -r opc module 00:07:13.859 21:12:27 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=iaa 00:07:13.859 21:12:27 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:13.859 21:12:27 -- accel/accel.sh@72 -- # IFS== 00:07:13.859 21:12:27 -- accel/accel.sh@72 -- # read -r opc module 00:07:13.859 21:12:27 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=iaa 00:07:13.859 21:12:27 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:13.859 21:12:27 -- accel/accel.sh@72 -- # IFS== 00:07:13.859 21:12:27 -- accel/accel.sh@72 -- # read -r opc module 00:07:13.859 21:12:27 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:13.859 21:12:27 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:13.859 21:12:27 -- accel/accel.sh@72 -- # IFS== 00:07:13.859 21:12:27 -- accel/accel.sh@72 -- # read -r opc module 00:07:13.859 21:12:27 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:13.859 21:12:27 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:13.859 21:12:27 -- accel/accel.sh@72 -- # IFS== 00:07:13.859 21:12:27 -- accel/accel.sh@72 -- # read -r opc module 00:07:13.859 21:12:27 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:13.859 21:12:27 -- 
accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:13.859 21:12:27 -- accel/accel.sh@72 -- # IFS== 00:07:13.859 21:12:27 -- accel/accel.sh@72 -- # read -r opc module 00:07:13.859 21:12:27 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=dsa 00:07:13.859 21:12:27 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:13.859 21:12:27 -- accel/accel.sh@72 -- # IFS== 00:07:13.859 21:12:27 -- accel/accel.sh@72 -- # read -r opc module 00:07:13.859 21:12:27 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:13.859 21:12:27 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:13.859 21:12:27 -- accel/accel.sh@72 -- # IFS== 00:07:13.859 21:12:27 -- accel/accel.sh@72 -- # read -r opc module 00:07:13.859 21:12:27 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=dsa 00:07:13.859 21:12:27 -- accel/accel.sh@75 -- # killprocess 1031711 00:07:13.859 21:12:27 -- common/autotest_common.sh@936 -- # '[' -z 1031711 ']' 00:07:13.859 21:12:27 -- common/autotest_common.sh@940 -- # kill -0 1031711 00:07:13.859 21:12:27 -- common/autotest_common.sh@941 -- # uname 00:07:13.859 21:12:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:13.859 21:12:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1031711 00:07:13.859 21:12:27 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:13.859 21:12:27 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:13.859 21:12:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1031711' 00:07:13.859 killing process with pid 1031711 00:07:13.859 21:12:27 -- common/autotest_common.sh@955 -- # kill 1031711 00:07:13.859 21:12:27 -- common/autotest_common.sh@960 -- # wait 1031711 00:07:17.160 21:12:31 -- accel/accel.sh@76 -- # trap - ERR 00:07:17.160 21:12:31 -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:07:17.160 21:12:31 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:17.160 21:12:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:17.160 21:12:31 -- common/autotest_common.sh@10 -- # set +x 00:07:17.160 21:12:31 -- common/autotest_common.sh@1111 -- # accel_perf -h 00:07:17.160 21:12:31 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:07:17.160 21:12:31 -- accel/accel.sh@12 -- # build_accel_config 00:07:17.160 21:12:31 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:17.160 21:12:31 -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:07:17.160 21:12:31 -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:07:17.160 21:12:31 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:07:17.160 21:12:31 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:07:17.160 21:12:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:17.160 21:12:31 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:17.160 21:12:31 -- accel/accel.sh@40 -- # local IFS=, 00:07:17.160 21:12:31 -- accel/accel.sh@41 -- # jq -r . 
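Rewinding to the accel_get_opc_assignments block traced above: the harness flattens the RPC's JSON reply into opc=module lines and folds them into the expected_opcs table, which the later tests consult to decide whether an opcode should land on dsa, iaa, or software. The same transformation standalone (the sample reply is hypothetical; the jq filter and the IFS== read loop are exactly the ones in the trace):

    # Sample reply is made up; the filter and read loop match the xtrace above
    json='{"copy": "dsa", "crc32c": "dsa", "compress": "iaa", "xor": "software"}'
    declare -A expected_opcs
    while IFS== read -r opc module; do
        expected_opcs["$opc"]=$module
    done < <(jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' <<< "$json")
    echo "crc32c -> ${expected_opcs[crc32c]}"    # crc32c -> dsa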
00:07:17.160 21:12:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:17.160 21:12:31 -- common/autotest_common.sh@10 -- # set +x 00:07:17.160 21:12:31 -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:07:17.160 21:12:31 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:17.160 21:12:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:17.160 21:12:31 -- common/autotest_common.sh@10 -- # set +x 00:07:17.160 ************************************ 00:07:17.160 START TEST accel_missing_filename 00:07:17.160 ************************************ 00:07:17.160 21:12:31 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w compress 00:07:17.160 21:12:31 -- common/autotest_common.sh@638 -- # local es=0 00:07:17.160 21:12:31 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress 00:07:17.160 21:12:31 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:07:17.160 21:12:31 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:17.160 21:12:31 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:07:17.160 21:12:31 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:17.160 21:12:31 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress 00:07:17.160 21:12:31 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:07:17.160 21:12:31 -- accel/accel.sh@12 -- # build_accel_config 00:07:17.160 21:12:31 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:17.160 21:12:31 -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:07:17.160 21:12:31 -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:07:17.160 21:12:31 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:07:17.160 21:12:31 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:07:17.160 21:12:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:17.160 21:12:31 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:17.160 21:12:31 -- accel/accel.sh@40 -- # local IFS=, 00:07:17.160 21:12:31 -- accel/accel.sh@41 -- # jq -r . 00:07:17.160 [2024-04-24 21:12:31.900503] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization... 00:07:17.160 [2024-04-24 21:12:31.900678] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1034341 ] 00:07:17.160 EAL: No free 2048 kB hugepages reported on node 1 00:07:17.160 [2024-04-24 21:12:32.040897] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.421 [2024-04-24 21:12:32.138337] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.421 [2024-04-24 21:12:32.142856] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:07:17.421 [2024-04-24 21:12:32.150823] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:07:24.118 [2024-04-24 21:12:38.533124] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:25.496 [2024-04-24 21:12:40.392146] accel_perf.c:1394:main: *ERROR*: ERROR starting application 00:07:25.754 A filename is required. 
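That failure is the point of the test: per the option table accel_perf prints later in this log, compress and decompress workloads take their input from -l, and the NOT-wrapped invocation above deliberately omits it. A passing form would look like this (the input path is the one the compress_verify test below actually uses):

    /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf \
        -t 1 -w compress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib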
00:07:25.754 21:12:40 -- common/autotest_common.sh@641 -- # es=234 00:07:25.754 21:12:40 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:07:25.754 21:12:40 -- common/autotest_common.sh@650 -- # es=106 00:07:25.754 21:12:40 -- common/autotest_common.sh@651 -- # case "$es" in 00:07:25.754 21:12:40 -- common/autotest_common.sh@658 -- # es=1 00:07:25.754 21:12:40 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:07:25.754 00:07:25.754 real 0m8.690s 00:07:25.754 user 0m2.299s 00:07:25.754 sys 0m0.253s 00:07:25.754 21:12:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:25.754 21:12:40 -- common/autotest_common.sh@10 -- # set +x 00:07:25.754 ************************************ 00:07:25.754 END TEST accel_missing_filename 00:07:25.754 ************************************ 00:07:25.754 21:12:40 -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y 00:07:25.754 21:12:40 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:07:25.754 21:12:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:25.754 21:12:40 -- common/autotest_common.sh@10 -- # set +x 00:07:25.754 ************************************ 00:07:25.754 START TEST accel_compress_verify 00:07:25.754 ************************************ 00:07:25.754 21:12:40 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y 00:07:25.754 21:12:40 -- common/autotest_common.sh@638 -- # local es=0 00:07:25.754 21:12:40 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y 00:07:25.754 21:12:40 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:07:25.754 21:12:40 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:25.754 21:12:40 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:07:25.754 21:12:40 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:25.754 21:12:40 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y 00:07:25.754 21:12:40 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y 00:07:25.754 21:12:40 -- accel/accel.sh@12 -- # build_accel_config 00:07:25.754 21:12:40 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:25.754 21:12:40 -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:07:25.754 21:12:40 -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:07:25.754 21:12:40 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:07:25.754 21:12:40 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:07:25.754 21:12:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:25.754 21:12:40 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:25.754 21:12:40 -- accel/accel.sh@40 -- # local IFS=, 00:07:25.754 21:12:40 -- accel/accel.sh@41 -- # jq -r . 00:07:25.754 [2024-04-24 21:12:40.691198] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization... 
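The es= bookkeeping above (234, then 106, then 1) is the NOT helper from autotest_common.sh normalizing the child's exit status before inverting it. A rough reconstruction from the trace alone (the real helper also validates its argument via valid_exec_arg, and the exact case table here is an assumption):

    NOT() {
        local es=0
        "$@" || es=$?
        (( es > 128 )) && es=$(( es - 128 ))   # 234 -> 106: strip the >128 offset
        case "$es" in
            0) ;;
            *) es=1 ;;                         # any real failure collapses to 1
        esac
        (( !es == 0 ))                         # succeed exactly when the command failed
    }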
00:07:25.754 [2024-04-24 21:12:40.691306] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1036016 ] 00:07:26.015 EAL: No free 2048 kB hugepages reported on node 1 00:07:26.015 [2024-04-24 21:12:40.804206] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.015 [2024-04-24 21:12:40.901890] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.015 [2024-04-24 21:12:40.906459] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:07:26.015 [2024-04-24 21:12:40.914426] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:07:32.673 [2024-04-24 21:12:47.317861] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:34.582 [2024-04-24 21:12:49.164374] accel_perf.c:1394:main: *ERROR*: ERROR starting application 00:07:34.582 00:07:34.582 Compression does not support the verify option, aborting. 00:07:34.582 21:12:49 -- common/autotest_common.sh@641 -- # es=161 00:07:34.582 21:12:49 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:07:34.582 21:12:49 -- common/autotest_common.sh@650 -- # es=33 00:07:34.582 21:12:49 -- common/autotest_common.sh@651 -- # case "$es" in 00:07:34.582 21:12:49 -- common/autotest_common.sh@658 -- # es=1 00:07:34.582 21:12:49 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:07:34.582 00:07:34.582 real 0m8.668s 00:07:34.582 user 0m2.277s 00:07:34.582 sys 0m0.246s 00:07:34.582 21:12:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:34.582 21:12:49 -- common/autotest_common.sh@10 -- # set +x 00:07:34.582 ************************************ 00:07:34.582 END TEST accel_compress_verify 00:07:34.582 ************************************ 00:07:34.582 21:12:49 -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:07:34.582 21:12:49 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:34.582 21:12:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:34.582 21:12:49 -- common/autotest_common.sh@10 -- # set +x 00:07:34.582 ************************************ 00:07:34.582 START TEST accel_wrong_workload 00:07:34.582 ************************************ 00:07:34.582 21:12:49 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w foobar 00:07:34.582 21:12:49 -- common/autotest_common.sh@638 -- # local es=0 00:07:34.582 21:12:49 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:07:34.582 21:12:49 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:07:34.582 21:12:49 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:34.582 21:12:49 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:07:34.582 21:12:49 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:34.583 21:12:49 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w foobar 00:07:34.583 21:12:49 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:07:34.583 21:12:49 -- accel/accel.sh@12 -- # build_accel_config 00:07:34.583 21:12:49 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:34.583 21:12:49 -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:07:34.583 21:12:49 -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:07:34.583 21:12:49 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 
00:07:34.583 21:12:49 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:07:34.583 21:12:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:34.583 21:12:49 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:34.583 21:12:49 -- accel/accel.sh@40 -- # local IFS=, 00:07:34.583 21:12:49 -- accel/accel.sh@41 -- # jq -r . 00:07:34.583 Unsupported workload type: foobar 00:07:34.583 [2024-04-24 21:12:49.459589] app.c:1364:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:07:34.583 accel_perf options: 00:07:34.583 [-h help message] 00:07:34.583 [-q queue depth per core] 00:07:34.583 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:34.583 [-T number of threads per core 00:07:34.583 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:34.583 [-t time in seconds] 00:07:34.583 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:34.583 [ dif_verify, , dif_generate, dif_generate_copy 00:07:34.583 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:34.583 [-l for compress/decompress workloads, name of uncompressed input file 00:07:34.583 [-S for crc32c workload, use this seed value (default 0) 00:07:34.583 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:34.583 [-f for fill workload, use this BYTE value (default 255) 00:07:34.583 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:34.583 [-y verify result if this switch is on] 00:07:34.583 [-a tasks to allocate per core (default: same value as -q)] 00:07:34.583 Can be used to spread operations across a wider range of memory. 
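With the option table printed, the shape of a valid invocation is clear. For example, the crc32c form this suite runs a few tests later, with every flag taken straight from the usage above:

    /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf \
        -t 1 -w crc32c -S 32 -y -q 64    # -q is optional; the accel_test runs omit it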
00:07:34.583 21:12:49 -- common/autotest_common.sh@641 -- # es=1 00:07:34.583 21:12:49 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:07:34.583 21:12:49 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:07:34.583 21:12:49 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:07:34.583 00:07:34.583 real 0m0.053s 00:07:34.583 user 0m0.051s 00:07:34.583 sys 0m0.034s 00:07:34.583 21:12:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:34.583 21:12:49 -- common/autotest_common.sh@10 -- # set +x 00:07:34.583 ************************************ 00:07:34.583 END TEST accel_wrong_workload 00:07:34.583 ************************************ 00:07:34.583 21:12:49 -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:07:34.583 21:12:49 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:07:34.583 21:12:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:34.583 21:12:49 -- common/autotest_common.sh@10 -- # set +x 00:07:34.844 ************************************ 00:07:34.844 START TEST accel_negative_buffers 00:07:34.844 ************************************ 00:07:34.844 21:12:49 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:07:34.844 21:12:49 -- common/autotest_common.sh@638 -- # local es=0 00:07:34.844 21:12:49 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:07:34.844 21:12:49 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:07:34.844 21:12:49 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:34.844 21:12:49 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:07:34.844 21:12:49 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:34.844 21:12:49 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w xor -y -x -1 00:07:34.844 21:12:49 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:07:34.844 21:12:49 -- accel/accel.sh@12 -- # build_accel_config 00:07:34.844 21:12:49 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:34.844 21:12:49 -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:07:34.844 21:12:49 -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:07:34.844 21:12:49 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:07:34.844 21:12:49 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:07:34.844 21:12:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:34.844 21:12:49 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:34.844 21:12:49 -- accel/accel.sh@40 -- # local IFS=, 00:07:34.844 21:12:49 -- accel/accel.sh@41 -- # jq -r . 00:07:34.844 -x option must be non-negative. 00:07:34.844 [2024-04-24 21:12:49.611765] app.c:1364:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:07:34.844 accel_perf options: 00:07:34.844 [-h help message] 00:07:34.844 [-q queue depth per core] 00:07:34.844 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:34.844 [-T number of threads per core 00:07:34.844 [-o transfer size in bytes (default: 4KiB. 
For compress/decompress, 0 means the input file size)] 00:07:34.844 [-t time in seconds] 00:07:34.844 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:34.844 [ dif_verify, , dif_generate, dif_generate_copy 00:07:34.844 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:34.844 [-l for compress/decompress workloads, name of uncompressed input file 00:07:34.844 [-S for crc32c workload, use this seed value (default 0) 00:07:34.844 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:34.844 [-f for fill workload, use this BYTE value (default 255) 00:07:34.844 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:34.844 [-y verify result if this switch is on] 00:07:34.844 [-a tasks to allocate per core (default: same value as -q)] 00:07:34.844 Can be used to spread operations across a wider range of memory. 00:07:34.844 21:12:49 -- common/autotest_common.sh@641 -- # es=1 00:07:34.844 21:12:49 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:07:34.844 21:12:49 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:07:34.844 21:12:49 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:07:34.844 00:07:34.844 real 0m0.051s 00:07:34.844 user 0m0.058s 00:07:34.844 sys 0m0.023s 00:07:34.844 21:12:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:34.844 21:12:49 -- common/autotest_common.sh@10 -- # set +x 00:07:34.844 ************************************ 00:07:34.844 END TEST accel_negative_buffers 00:07:34.844 ************************************ 00:07:34.844 21:12:49 -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:07:34.844 21:12:49 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:34.844 21:12:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:34.844 21:12:49 -- common/autotest_common.sh@10 -- # set +x 00:07:34.844 ************************************ 00:07:34.844 START TEST accel_crc32c 00:07:34.844 ************************************ 00:07:34.844 21:12:49 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w crc32c -S 32 -y 00:07:34.844 21:12:49 -- accel/accel.sh@16 -- # local accel_opc 00:07:34.844 21:12:49 -- accel/accel.sh@17 -- # local accel_module 00:07:34.844 21:12:49 -- accel/accel.sh@19 -- # IFS=: 00:07:34.844 21:12:49 -- accel/accel.sh@19 -- # read -r var val 00:07:34.844 21:12:49 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:07:34.844 21:12:49 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:07:34.844 21:12:49 -- accel/accel.sh@12 -- # build_accel_config 00:07:34.844 21:12:49 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:34.844 21:12:49 -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:07:34.844 21:12:49 -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:07:34.844 21:12:49 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:07:34.844 21:12:49 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:07:34.844 21:12:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:34.844 21:12:49 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:34.844 21:12:49 -- accel/accel.sh@40 -- # local IFS=, 00:07:34.844 21:12:49 -- accel/accel.sh@41 -- # jq -r . 00:07:34.844 [2024-04-24 21:12:49.769876] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization... 
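Each accel test opens with the same build_accel_config trace shown above: the method entries accumulate in accel_json_cfg and are joined with IFS=, into the JSON that accel_perf receives on -c /dev/fd/62. A hedged sketch of that assembly (the array entries, the IFS join, and the jq -r . pass are from the trace; the surrounding subsystems wrapper is an assumption):

    build_accel_config() {
        local accel_json_cfg=()
        accel_json_cfg+=('{"method": "dsa_scan_accel_module"}')
        accel_json_cfg+=('{"method": "iaa_scan_accel_module"}')
        local IFS=,    # makes ${accel_json_cfg[*]} comma-join the entries
        jq -r . <<< '{"subsystems": [{"subsystem": "accel", "config": ['"${accel_json_cfg[*]}"']}]}'
    }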
00:07:34.844 [2024-04-24 21:12:49.769980] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1037874 ] 00:07:35.104 EAL: No free 2048 kB hugepages reported on node 1 00:07:35.104 [2024-04-24 21:12:49.883982] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.104 [2024-04-24 21:12:49.976789] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.104 [2024-04-24 21:12:49.981283] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:07:35.104 [2024-04-24 21:12:49.989244] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:07:41.686 21:12:56 -- accel/accel.sh@20 -- # val= 00:07:41.686 21:12:56 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.686 21:12:56 -- accel/accel.sh@19 -- # IFS=: 00:07:41.686 21:12:56 -- accel/accel.sh@19 -- # read -r var val 00:07:41.686 21:12:56 -- accel/accel.sh@20 -- # val= 00:07:41.686 21:12:56 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.687 21:12:56 -- accel/accel.sh@19 -- # IFS=: 00:07:41.687 21:12:56 -- accel/accel.sh@19 -- # read -r var val 00:07:41.687 21:12:56 -- accel/accel.sh@20 -- # val=0x1 00:07:41.687 21:12:56 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.687 21:12:56 -- accel/accel.sh@19 -- # IFS=: 00:07:41.687 21:12:56 -- accel/accel.sh@19 -- # read -r var val 00:07:41.687 21:12:56 -- accel/accel.sh@20 -- # val= 00:07:41.687 21:12:56 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.687 21:12:56 -- accel/accel.sh@19 -- # IFS=: 00:07:41.687 21:12:56 -- accel/accel.sh@19 -- # read -r var val 00:07:41.687 21:12:56 -- accel/accel.sh@20 -- # val= 00:07:41.687 21:12:56 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.687 21:12:56 -- accel/accel.sh@19 -- # IFS=: 00:07:41.687 21:12:56 -- accel/accel.sh@19 -- # read -r var val 00:07:41.687 21:12:56 -- accel/accel.sh@20 -- # val=crc32c 00:07:41.687 21:12:56 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.687 21:12:56 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:41.687 21:12:56 -- accel/accel.sh@19 -- # IFS=: 00:07:41.687 21:12:56 -- accel/accel.sh@19 -- # read -r var val 00:07:41.687 21:12:56 -- accel/accel.sh@20 -- # val=32 00:07:41.687 21:12:56 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.687 21:12:56 -- accel/accel.sh@19 -- # IFS=: 00:07:41.687 21:12:56 -- accel/accel.sh@19 -- # read -r var val 00:07:41.687 21:12:56 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:41.687 21:12:56 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.687 21:12:56 -- accel/accel.sh@19 -- # IFS=: 00:07:41.687 21:12:56 -- accel/accel.sh@19 -- # read -r var val 00:07:41.687 21:12:56 -- accel/accel.sh@20 -- # val= 00:07:41.687 21:12:56 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.687 21:12:56 -- accel/accel.sh@19 -- # IFS=: 00:07:41.687 21:12:56 -- accel/accel.sh@19 -- # read -r var val 00:07:41.687 21:12:56 -- accel/accel.sh@20 -- # val=dsa 00:07:41.687 21:12:56 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.687 21:12:56 -- accel/accel.sh@22 -- # accel_module=dsa 00:07:41.687 21:12:56 -- accel/accel.sh@19 -- # IFS=: 00:07:41.687 21:12:56 -- accel/accel.sh@19 -- # read -r var val 00:07:41.687 21:12:56 -- accel/accel.sh@20 -- # val=32 00:07:41.687 21:12:56 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.687 21:12:56 -- accel/accel.sh@19 -- # IFS=: 00:07:41.687 21:12:56 -- accel/accel.sh@19 -- # read -r var val 00:07:41.687 21:12:56 -- 
accel/accel.sh@20 -- # val=32 00:07:41.687 21:12:56 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.687 21:12:56 -- accel/accel.sh@19 -- # IFS=: 00:07:41.687 21:12:56 -- accel/accel.sh@19 -- # read -r var val 00:07:41.687 21:12:56 -- accel/accel.sh@20 -- # val=1 00:07:41.687 21:12:56 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.687 21:12:56 -- accel/accel.sh@19 -- # IFS=: 00:07:41.687 21:12:56 -- accel/accel.sh@19 -- # read -r var val 00:07:41.687 21:12:56 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:41.687 21:12:56 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.687 21:12:56 -- accel/accel.sh@19 -- # IFS=: 00:07:41.687 21:12:56 -- accel/accel.sh@19 -- # read -r var val 00:07:41.687 21:12:56 -- accel/accel.sh@20 -- # val=Yes 00:07:41.687 21:12:56 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.687 21:12:56 -- accel/accel.sh@19 -- # IFS=: 00:07:41.687 21:12:56 -- accel/accel.sh@19 -- # read -r var val 00:07:41.687 21:12:56 -- accel/accel.sh@20 -- # val= 00:07:41.687 21:12:56 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.687 21:12:56 -- accel/accel.sh@19 -- # IFS=: 00:07:41.687 21:12:56 -- accel/accel.sh@19 -- # read -r var val 00:07:41.687 21:12:56 -- accel/accel.sh@20 -- # val= 00:07:41.687 21:12:56 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.687 21:12:56 -- accel/accel.sh@19 -- # IFS=: 00:07:41.687 21:12:56 -- accel/accel.sh@19 -- # read -r var val 00:07:44.981 21:12:59 -- accel/accel.sh@20 -- # val= 00:07:44.981 21:12:59 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.981 21:12:59 -- accel/accel.sh@19 -- # IFS=: 00:07:44.981 21:12:59 -- accel/accel.sh@19 -- # read -r var val 00:07:44.981 21:12:59 -- accel/accel.sh@20 -- # val= 00:07:44.981 21:12:59 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.981 21:12:59 -- accel/accel.sh@19 -- # IFS=: 00:07:44.981 21:12:59 -- accel/accel.sh@19 -- # read -r var val 00:07:44.981 21:12:59 -- accel/accel.sh@20 -- # val= 00:07:44.981 21:12:59 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.981 21:12:59 -- accel/accel.sh@19 -- # IFS=: 00:07:44.981 21:12:59 -- accel/accel.sh@19 -- # read -r var val 00:07:44.981 21:12:59 -- accel/accel.sh@20 -- # val= 00:07:44.981 21:12:59 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.981 21:12:59 -- accel/accel.sh@19 -- # IFS=: 00:07:44.981 21:12:59 -- accel/accel.sh@19 -- # read -r var val 00:07:44.981 21:12:59 -- accel/accel.sh@20 -- # val= 00:07:44.981 21:12:59 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.981 21:12:59 -- accel/accel.sh@19 -- # IFS=: 00:07:44.981 21:12:59 -- accel/accel.sh@19 -- # read -r var val 00:07:44.981 21:12:59 -- accel/accel.sh@20 -- # val= 00:07:44.981 21:12:59 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.981 21:12:59 -- accel/accel.sh@19 -- # IFS=: 00:07:44.981 21:12:59 -- accel/accel.sh@19 -- # read -r var val 00:07:44.981 21:12:59 -- accel/accel.sh@27 -- # [[ -n dsa ]] 00:07:44.981 21:12:59 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:44.981 21:12:59 -- accel/accel.sh@27 -- # [[ dsa == \d\s\a ]] 00:07:44.981 00:07:44.981 real 0m9.681s 00:07:44.981 user 0m3.291s 00:07:44.981 sys 0m0.224s 00:07:44.981 21:12:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:44.981 21:12:59 -- common/autotest_common.sh@10 -- # set +x 00:07:44.981 ************************************ 00:07:44.981 END TEST accel_crc32c 00:07:44.981 ************************************ 00:07:44.981 21:12:59 -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:07:44.981 21:12:59 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 
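The closing checks of the crc32c test above, [[ -n dsa ]], [[ -n crc32c ]], and [[ dsa == \d\s\a ]], are the expanded forms of checks like [[ $accel_module == \d\s\a ]]. The backslash escaping matters: inside [[ ]], == treats an unquoted right-hand side as a glob pattern, so escaping every character forces a literal comparison:

    mod=dsa
    [[ $mod == \d\s\a ]] && echo "literal match"   # fires only for exactly dsa
    [[ $mod == d?a ]]    && echo "glob match"      # would also fire for dZa, d8a, ...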
00:07:44.981 21:12:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:44.981 21:12:59 -- common/autotest_common.sh@10 -- # set +x 00:07:44.981 ************************************ 00:07:44.981 START TEST accel_crc32c_C2 00:07:44.981 ************************************ 00:07:44.981 21:12:59 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w crc32c -y -C 2 00:07:44.981 21:12:59 -- accel/accel.sh@16 -- # local accel_opc 00:07:44.981 21:12:59 -- accel/accel.sh@17 -- # local accel_module 00:07:44.981 21:12:59 -- accel/accel.sh@19 -- # IFS=: 00:07:44.981 21:12:59 -- accel/accel.sh@19 -- # read -r var val 00:07:44.981 21:12:59 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:07:44.981 21:12:59 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:07:44.981 21:12:59 -- accel/accel.sh@12 -- # build_accel_config 00:07:44.981 21:12:59 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:44.981 21:12:59 -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:07:44.981 21:12:59 -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:07:44.981 21:12:59 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:07:44.981 21:12:59 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:07:44.981 21:12:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:44.981 21:12:59 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:44.981 21:12:59 -- accel/accel.sh@40 -- # local IFS=, 00:07:44.981 21:12:59 -- accel/accel.sh@41 -- # jq -r . 00:07:44.981 [2024-04-24 21:12:59.554755] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization... 00:07:44.981 [2024-04-24 21:12:59.554866] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1039914 ] 00:07:44.981 EAL: No free 2048 kB hugepages reported on node 1 00:07:44.981 [2024-04-24 21:12:59.664947] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.981 [2024-04-24 21:12:59.758772] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.981 [2024-04-24 21:12:59.763226] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:07:44.981 [2024-04-24 21:12:59.771197] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:07:51.585 21:13:06 -- accel/accel.sh@20 -- # val= 00:07:51.585 21:13:06 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.585 21:13:06 -- accel/accel.sh@19 -- # IFS=: 00:07:51.585 21:13:06 -- accel/accel.sh@19 -- # read -r var val 00:07:51.585 21:13:06 -- accel/accel.sh@20 -- # val= 00:07:51.585 21:13:06 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.585 21:13:06 -- accel/accel.sh@19 -- # IFS=: 00:07:51.585 21:13:06 -- accel/accel.sh@19 -- # read -r var val 00:07:51.585 21:13:06 -- accel/accel.sh@20 -- # val=0x1 00:07:51.585 21:13:06 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.585 21:13:06 -- accel/accel.sh@19 -- # IFS=: 00:07:51.585 21:13:06 -- accel/accel.sh@19 -- # read -r var val 00:07:51.585 21:13:06 -- accel/accel.sh@20 -- # val= 00:07:51.585 21:13:06 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.585 21:13:06 -- accel/accel.sh@19 -- # IFS=: 00:07:51.585 21:13:06 -- accel/accel.sh@19 -- # read -r var val 00:07:51.585 21:13:06 -- accel/accel.sh@20 -- # val= 00:07:51.585 21:13:06 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.585 21:13:06 
-- accel/accel.sh@19 -- # IFS=: 00:07:51.585 21:13:06 -- accel/accel.sh@19 -- # read -r var val 00:07:51.585 21:13:06 -- accel/accel.sh@20 -- # val=crc32c 00:07:51.585 21:13:06 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.585 21:13:06 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:51.585 21:13:06 -- accel/accel.sh@19 -- # IFS=: 00:07:51.585 21:13:06 -- accel/accel.sh@19 -- # read -r var val 00:07:51.585 21:13:06 -- accel/accel.sh@20 -- # val=0 00:07:51.585 21:13:06 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.585 21:13:06 -- accel/accel.sh@19 -- # IFS=: 00:07:51.585 21:13:06 -- accel/accel.sh@19 -- # read -r var val 00:07:51.585 21:13:06 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:51.585 21:13:06 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.585 21:13:06 -- accel/accel.sh@19 -- # IFS=: 00:07:51.585 21:13:06 -- accel/accel.sh@19 -- # read -r var val 00:07:51.585 21:13:06 -- accel/accel.sh@20 -- # val= 00:07:51.585 21:13:06 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.585 21:13:06 -- accel/accel.sh@19 -- # IFS=: 00:07:51.585 21:13:06 -- accel/accel.sh@19 -- # read -r var val 00:07:51.586 21:13:06 -- accel/accel.sh@20 -- # val=dsa 00:07:51.586 21:13:06 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.586 21:13:06 -- accel/accel.sh@22 -- # accel_module=dsa 00:07:51.586 21:13:06 -- accel/accel.sh@19 -- # IFS=: 00:07:51.586 21:13:06 -- accel/accel.sh@19 -- # read -r var val 00:07:51.586 21:13:06 -- accel/accel.sh@20 -- # val=32 00:07:51.586 21:13:06 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.586 21:13:06 -- accel/accel.sh@19 -- # IFS=: 00:07:51.586 21:13:06 -- accel/accel.sh@19 -- # read -r var val 00:07:51.586 21:13:06 -- accel/accel.sh@20 -- # val=32 00:07:51.586 21:13:06 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.586 21:13:06 -- accel/accel.sh@19 -- # IFS=: 00:07:51.586 21:13:06 -- accel/accel.sh@19 -- # read -r var val 00:07:51.586 21:13:06 -- accel/accel.sh@20 -- # val=1 00:07:51.586 21:13:06 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.586 21:13:06 -- accel/accel.sh@19 -- # IFS=: 00:07:51.586 21:13:06 -- accel/accel.sh@19 -- # read -r var val 00:07:51.586 21:13:06 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:51.586 21:13:06 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.586 21:13:06 -- accel/accel.sh@19 -- # IFS=: 00:07:51.586 21:13:06 -- accel/accel.sh@19 -- # read -r var val 00:07:51.586 21:13:06 -- accel/accel.sh@20 -- # val=Yes 00:07:51.586 21:13:06 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.586 21:13:06 -- accel/accel.sh@19 -- # IFS=: 00:07:51.586 21:13:06 -- accel/accel.sh@19 -- # read -r var val 00:07:51.586 21:13:06 -- accel/accel.sh@20 -- # val= 00:07:51.586 21:13:06 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.586 21:13:06 -- accel/accel.sh@19 -- # IFS=: 00:07:51.586 21:13:06 -- accel/accel.sh@19 -- # read -r var val 00:07:51.586 21:13:06 -- accel/accel.sh@20 -- # val= 00:07:51.586 21:13:06 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.586 21:13:06 -- accel/accel.sh@19 -- # IFS=: 00:07:51.586 21:13:06 -- accel/accel.sh@19 -- # read -r var val 00:07:54.887 21:13:09 -- accel/accel.sh@20 -- # val= 00:07:54.887 21:13:09 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.887 21:13:09 -- accel/accel.sh@19 -- # IFS=: 00:07:54.887 21:13:09 -- accel/accel.sh@19 -- # read -r var val 00:07:54.887 21:13:09 -- accel/accel.sh@20 -- # val= 00:07:54.887 21:13:09 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.887 21:13:09 -- accel/accel.sh@19 -- # IFS=: 00:07:54.887 21:13:09 -- accel/accel.sh@19 -- # read -r var val 
00:07:54.887 21:13:09 -- accel/accel.sh@20 -- # val= 00:07:54.887 21:13:09 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.887 21:13:09 -- accel/accel.sh@19 -- # IFS=: 00:07:54.887 21:13:09 -- accel/accel.sh@19 -- # read -r var val 00:07:54.887 21:13:09 -- accel/accel.sh@20 -- # val= 00:07:54.887 21:13:09 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.887 21:13:09 -- accel/accel.sh@19 -- # IFS=: 00:07:54.887 21:13:09 -- accel/accel.sh@19 -- # read -r var val 00:07:54.887 21:13:09 -- accel/accel.sh@20 -- # val= 00:07:54.887 21:13:09 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.887 21:13:09 -- accel/accel.sh@19 -- # IFS=: 00:07:54.887 21:13:09 -- accel/accel.sh@19 -- # read -r var val 00:07:54.887 21:13:09 -- accel/accel.sh@20 -- # val= 00:07:54.887 21:13:09 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.887 21:13:09 -- accel/accel.sh@19 -- # IFS=: 00:07:54.887 21:13:09 -- accel/accel.sh@19 -- # read -r var val 00:07:54.887 21:13:09 -- accel/accel.sh@27 -- # [[ -n dsa ]] 00:07:54.887 21:13:09 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:54.887 21:13:09 -- accel/accel.sh@27 -- # [[ dsa == \d\s\a ]] 00:07:54.887 00:07:54.887 real 0m9.656s 00:07:54.887 user 0m3.266s 00:07:54.887 sys 0m0.229s 00:07:54.887 21:13:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:54.887 21:13:09 -- common/autotest_common.sh@10 -- # set +x 00:07:54.887 ************************************ 00:07:54.887 END TEST accel_crc32c_C2 00:07:54.887 ************************************ 00:07:54.887 21:13:09 -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:07:54.887 21:13:09 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:54.887 21:13:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:54.887 21:13:09 -- common/autotest_common.sh@10 -- # set +x 00:07:54.887 ************************************ 00:07:54.887 START TEST accel_copy 00:07:54.887 ************************************ 00:07:54.887 21:13:09 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy -y 00:07:54.887 21:13:09 -- accel/accel.sh@16 -- # local accel_opc 00:07:54.887 21:13:09 -- accel/accel.sh@17 -- # local accel_module 00:07:54.887 21:13:09 -- accel/accel.sh@19 -- # IFS=: 00:07:54.887 21:13:09 -- accel/accel.sh@19 -- # read -r var val 00:07:54.887 21:13:09 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:07:54.887 21:13:09 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:54.887 21:13:09 -- accel/accel.sh@12 -- # build_accel_config 00:07:54.887 21:13:09 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:54.887 21:13:09 -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:07:54.887 21:13:09 -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:07:54.887 21:13:09 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:07:54.887 21:13:09 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:07:54.887 21:13:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:54.887 21:13:09 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:54.887 21:13:09 -- accel/accel.sh@40 -- # local IFS=, 00:07:54.887 21:13:09 -- accel/accel.sh@41 -- # jq -r . 00:07:54.887 [2024-04-24 21:13:09.315475] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization... 
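Every START TEST / END TEST banner pair in this log, with the real/user/sys triplet between them, comes from the run_test wrapper. A minimal sketch of its observable behavior (the real helper in autotest_common.sh also manages xtrace state and performs the argument checks visible as '[' 7 -le 1 ']' lines above):

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"    # produces the real/user/sys lines
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }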
00:07:54.887 [2024-04-24 21:13:09.315580] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1041798 ] 00:07:54.887 EAL: No free 2048 kB hugepages reported on node 1 00:07:54.887 [2024-04-24 21:13:09.430253] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.887 [2024-04-24 21:13:09.525852] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.887 [2024-04-24 21:13:09.530349] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:07:54.887 [2024-04-24 21:13:09.538316] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:08:01.475 21:13:15 -- accel/accel.sh@20 -- # val= 00:08:01.475 21:13:15 -- accel/accel.sh@21 -- # case "$var" in 00:08:01.475 21:13:15 -- accel/accel.sh@19 -- # IFS=: 00:08:01.475 21:13:15 -- accel/accel.sh@19 -- # read -r var val 00:08:01.475 21:13:15 -- accel/accel.sh@20 -- # val= 00:08:01.475 21:13:15 -- accel/accel.sh@21 -- # case "$var" in 00:08:01.475 21:13:15 -- accel/accel.sh@19 -- # IFS=: 00:08:01.475 21:13:15 -- accel/accel.sh@19 -- # read -r var val 00:08:01.475 21:13:15 -- accel/accel.sh@20 -- # val=0x1 00:08:01.475 21:13:15 -- accel/accel.sh@21 -- # case "$var" in 00:08:01.475 21:13:15 -- accel/accel.sh@19 -- # IFS=: 00:08:01.475 21:13:15 -- accel/accel.sh@19 -- # read -r var val 00:08:01.475 21:13:15 -- accel/accel.sh@20 -- # val= 00:08:01.475 21:13:15 -- accel/accel.sh@21 -- # case "$var" in 00:08:01.475 21:13:15 -- accel/accel.sh@19 -- # IFS=: 00:08:01.475 21:13:15 -- accel/accel.sh@19 -- # read -r var val 00:08:01.475 21:13:15 -- accel/accel.sh@20 -- # val= 00:08:01.475 21:13:15 -- accel/accel.sh@21 -- # case "$var" in 00:08:01.475 21:13:15 -- accel/accel.sh@19 -- # IFS=: 00:08:01.475 21:13:15 -- accel/accel.sh@19 -- # read -r var val 00:08:01.475 21:13:15 -- accel/accel.sh@20 -- # val=copy 00:08:01.475 21:13:15 -- accel/accel.sh@21 -- # case "$var" in 00:08:01.475 21:13:15 -- accel/accel.sh@23 -- # accel_opc=copy 00:08:01.475 21:13:15 -- accel/accel.sh@19 -- # IFS=: 00:08:01.475 21:13:15 -- accel/accel.sh@19 -- # read -r var val 00:08:01.475 21:13:15 -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:01.475 21:13:15 -- accel/accel.sh@21 -- # case "$var" in 00:08:01.475 21:13:15 -- accel/accel.sh@19 -- # IFS=: 00:08:01.475 21:13:15 -- accel/accel.sh@19 -- # read -r var val 00:08:01.475 21:13:15 -- accel/accel.sh@20 -- # val= 00:08:01.475 21:13:15 -- accel/accel.sh@21 -- # case "$var" in 00:08:01.475 21:13:15 -- accel/accel.sh@19 -- # IFS=: 00:08:01.475 21:13:15 -- accel/accel.sh@19 -- # read -r var val 00:08:01.475 21:13:15 -- accel/accel.sh@20 -- # val=dsa 00:08:01.475 21:13:15 -- accel/accel.sh@21 -- # case "$var" in 00:08:01.475 21:13:15 -- accel/accel.sh@22 -- # accel_module=dsa 00:08:01.475 21:13:15 -- accel/accel.sh@19 -- # IFS=: 00:08:01.475 21:13:15 -- accel/accel.sh@19 -- # read -r var val 00:08:01.475 21:13:15 -- accel/accel.sh@20 -- # val=32 00:08:01.475 21:13:15 -- accel/accel.sh@21 -- # case "$var" in 00:08:01.475 21:13:15 -- accel/accel.sh@19 -- # IFS=: 00:08:01.475 21:13:15 -- accel/accel.sh@19 -- # read -r var val 00:08:01.475 21:13:15 -- accel/accel.sh@20 -- # val=32 00:08:01.475 21:13:15 -- accel/accel.sh@21 -- # case "$var" in 00:08:01.475 21:13:15 -- accel/accel.sh@19 -- # IFS=: 00:08:01.475 21:13:15 -- accel/accel.sh@19 -- # read -r var val 00:08:01.475 21:13:15 -- 
accel/accel.sh@20 -- # val=1 00:08:01.475 21:13:15 -- accel/accel.sh@21 -- # case "$var" in 00:08:01.475 21:13:15 -- accel/accel.sh@19 -- # IFS=: 00:08:01.475 21:13:15 -- accel/accel.sh@19 -- # read -r var val 00:08:01.475 21:13:15 -- accel/accel.sh@20 -- # val='1 seconds' 00:08:01.475 21:13:15 -- accel/accel.sh@21 -- # case "$var" in 00:08:01.475 21:13:15 -- accel/accel.sh@19 -- # IFS=: 00:08:01.475 21:13:15 -- accel/accel.sh@19 -- # read -r var val 00:08:01.475 21:13:15 -- accel/accel.sh@20 -- # val=Yes 00:08:01.475 21:13:15 -- accel/accel.sh@21 -- # case "$var" in 00:08:01.475 21:13:15 -- accel/accel.sh@19 -- # IFS=: 00:08:01.475 21:13:15 -- accel/accel.sh@19 -- # read -r var val 00:08:01.475 21:13:15 -- accel/accel.sh@20 -- # val= 00:08:01.475 21:13:15 -- accel/accel.sh@21 -- # case "$var" in 00:08:01.475 21:13:15 -- accel/accel.sh@19 -- # IFS=: 00:08:01.475 21:13:15 -- accel/accel.sh@19 -- # read -r var val 00:08:01.475 21:13:15 -- accel/accel.sh@20 -- # val= 00:08:01.475 21:13:15 -- accel/accel.sh@21 -- # case "$var" in 00:08:01.475 21:13:15 -- accel/accel.sh@19 -- # IFS=: 00:08:01.475 21:13:15 -- accel/accel.sh@19 -- # read -r var val 00:08:04.021 21:13:18 -- accel/accel.sh@20 -- # val= 00:08:04.021 21:13:18 -- accel/accel.sh@21 -- # case "$var" in 00:08:04.021 21:13:18 -- accel/accel.sh@19 -- # IFS=: 00:08:04.021 21:13:18 -- accel/accel.sh@19 -- # read -r var val 00:08:04.021 21:13:18 -- accel/accel.sh@20 -- # val= 00:08:04.021 21:13:18 -- accel/accel.sh@21 -- # case "$var" in 00:08:04.021 21:13:18 -- accel/accel.sh@19 -- # IFS=: 00:08:04.021 21:13:18 -- accel/accel.sh@19 -- # read -r var val 00:08:04.021 21:13:18 -- accel/accel.sh@20 -- # val= 00:08:04.021 21:13:18 -- accel/accel.sh@21 -- # case "$var" in 00:08:04.021 21:13:18 -- accel/accel.sh@19 -- # IFS=: 00:08:04.021 21:13:18 -- accel/accel.sh@19 -- # read -r var val 00:08:04.021 21:13:18 -- accel/accel.sh@20 -- # val= 00:08:04.021 21:13:18 -- accel/accel.sh@21 -- # case "$var" in 00:08:04.021 21:13:18 -- accel/accel.sh@19 -- # IFS=: 00:08:04.021 21:13:18 -- accel/accel.sh@19 -- # read -r var val 00:08:04.021 21:13:18 -- accel/accel.sh@20 -- # val= 00:08:04.021 21:13:18 -- accel/accel.sh@21 -- # case "$var" in 00:08:04.021 21:13:18 -- accel/accel.sh@19 -- # IFS=: 00:08:04.021 21:13:18 -- accel/accel.sh@19 -- # read -r var val 00:08:04.021 21:13:18 -- accel/accel.sh@20 -- # val= 00:08:04.021 21:13:18 -- accel/accel.sh@21 -- # case "$var" in 00:08:04.021 21:13:18 -- accel/accel.sh@19 -- # IFS=: 00:08:04.021 21:13:18 -- accel/accel.sh@19 -- # read -r var val 00:08:04.021 21:13:18 -- accel/accel.sh@27 -- # [[ -n dsa ]] 00:08:04.021 21:13:18 -- accel/accel.sh@27 -- # [[ -n copy ]] 00:08:04.021 21:13:18 -- accel/accel.sh@27 -- # [[ dsa == \d\s\a ]] 00:08:04.021 00:08:04.021 real 0m9.655s 00:08:04.021 user 0m3.255s 00:08:04.021 sys 0m0.230s 00:08:04.021 21:13:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:04.021 21:13:18 -- common/autotest_common.sh@10 -- # set +x 00:08:04.021 ************************************ 00:08:04.021 END TEST accel_copy 00:08:04.021 ************************************ 00:08:04.021 21:13:18 -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:08:04.021 21:13:18 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:08:04.021 21:13:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:04.021 21:13:18 -- common/autotest_common.sh@10 -- # set +x 00:08:04.282 ************************************ 00:08:04.282 START TEST accel_fill 
00:08:04.282 ************************************ 00:08:04.282 21:13:19 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:08:04.282 21:13:19 -- accel/accel.sh@16 -- # local accel_opc 00:08:04.282 21:13:19 -- accel/accel.sh@17 -- # local accel_module 00:08:04.282 21:13:19 -- accel/accel.sh@19 -- # IFS=: 00:08:04.282 21:13:19 -- accel/accel.sh@19 -- # read -r var val 00:08:04.282 21:13:19 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:08:04.282 21:13:19 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:08:04.282 21:13:19 -- accel/accel.sh@12 -- # build_accel_config 00:08:04.282 21:13:19 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:04.282 21:13:19 -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:08:04.282 21:13:19 -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:08:04.282 21:13:19 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:08:04.282 21:13:19 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:08:04.282 21:13:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:04.282 21:13:19 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:04.282 21:13:19 -- accel/accel.sh@40 -- # local IFS=, 00:08:04.282 21:13:19 -- accel/accel.sh@41 -- # jq -r . 00:08:04.282 [2024-04-24 21:13:19.082639] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization... 00:08:04.282 [2024-04-24 21:13:19.082744] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1043683 ] 00:08:04.282 EAL: No free 2048 kB hugepages reported on node 1 00:08:04.282 [2024-04-24 21:13:19.202303] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.543 [2024-04-24 21:13:19.299182] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.543 [2024-04-24 21:13:19.303693] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:08:04.543 [2024-04-24 21:13:19.311664] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:08:11.142 21:13:25 -- accel/accel.sh@20 -- # val= 00:08:11.142 21:13:25 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.142 21:13:25 -- accel/accel.sh@19 -- # IFS=: 00:08:11.142 21:13:25 -- accel/accel.sh@19 -- # read -r var val 00:08:11.142 21:13:25 -- accel/accel.sh@20 -- # val= 00:08:11.142 21:13:25 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.142 21:13:25 -- accel/accel.sh@19 -- # IFS=: 00:08:11.142 21:13:25 -- accel/accel.sh@19 -- # read -r var val 00:08:11.142 21:13:25 -- accel/accel.sh@20 -- # val=0x1 00:08:11.142 21:13:25 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.142 21:13:25 -- accel/accel.sh@19 -- # IFS=: 00:08:11.142 21:13:25 -- accel/accel.sh@19 -- # read -r var val 00:08:11.142 21:13:25 -- accel/accel.sh@20 -- # val= 00:08:11.142 21:13:25 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.142 21:13:25 -- accel/accel.sh@19 -- # IFS=: 00:08:11.142 21:13:25 -- accel/accel.sh@19 -- # read -r var val 00:08:11.142 21:13:25 -- accel/accel.sh@20 -- # val= 00:08:11.142 21:13:25 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.142 21:13:25 -- accel/accel.sh@19 -- # IFS=: 00:08:11.142 21:13:25 -- accel/accel.sh@19 -- # read -r var val 00:08:11.142 21:13:25 -- accel/accel.sh@20 -- # val=fill 00:08:11.142 21:13:25 -- accel/accel.sh@21 
-- # case "$var" in 00:08:11.142 21:13:25 -- accel/accel.sh@23 -- # accel_opc=fill 00:08:11.142 21:13:25 -- accel/accel.sh@19 -- # IFS=: 00:08:11.142 21:13:25 -- accel/accel.sh@19 -- # read -r var val 00:08:11.142 21:13:25 -- accel/accel.sh@20 -- # val=0x80 00:08:11.142 21:13:25 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.142 21:13:25 -- accel/accel.sh@19 -- # IFS=: 00:08:11.142 21:13:25 -- accel/accel.sh@19 -- # read -r var val 00:08:11.142 21:13:25 -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:11.142 21:13:25 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.142 21:13:25 -- accel/accel.sh@19 -- # IFS=: 00:08:11.142 21:13:25 -- accel/accel.sh@19 -- # read -r var val 00:08:11.142 21:13:25 -- accel/accel.sh@20 -- # val= 00:08:11.142 21:13:25 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.142 21:13:25 -- accel/accel.sh@19 -- # IFS=: 00:08:11.142 21:13:25 -- accel/accel.sh@19 -- # read -r var val 00:08:11.142 21:13:25 -- accel/accel.sh@20 -- # val=dsa 00:08:11.142 21:13:25 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.142 21:13:25 -- accel/accel.sh@22 -- # accel_module=dsa 00:08:11.142 21:13:25 -- accel/accel.sh@19 -- # IFS=: 00:08:11.142 21:13:25 -- accel/accel.sh@19 -- # read -r var val 00:08:11.142 21:13:25 -- accel/accel.sh@20 -- # val=64 00:08:11.142 21:13:25 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.142 21:13:25 -- accel/accel.sh@19 -- # IFS=: 00:08:11.142 21:13:25 -- accel/accel.sh@19 -- # read -r var val 00:08:11.142 21:13:25 -- accel/accel.sh@20 -- # val=64 00:08:11.142 21:13:25 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.142 21:13:25 -- accel/accel.sh@19 -- # IFS=: 00:08:11.142 21:13:25 -- accel/accel.sh@19 -- # read -r var val 00:08:11.142 21:13:25 -- accel/accel.sh@20 -- # val=1 00:08:11.142 21:13:25 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.142 21:13:25 -- accel/accel.sh@19 -- # IFS=: 00:08:11.142 21:13:25 -- accel/accel.sh@19 -- # read -r var val 00:08:11.142 21:13:25 -- accel/accel.sh@20 -- # val='1 seconds' 00:08:11.142 21:13:25 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.142 21:13:25 -- accel/accel.sh@19 -- # IFS=: 00:08:11.142 21:13:25 -- accel/accel.sh@19 -- # read -r var val 00:08:11.142 21:13:25 -- accel/accel.sh@20 -- # val=Yes 00:08:11.142 21:13:25 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.142 21:13:25 -- accel/accel.sh@19 -- # IFS=: 00:08:11.142 21:13:25 -- accel/accel.sh@19 -- # read -r var val 00:08:11.142 21:13:25 -- accel/accel.sh@20 -- # val= 00:08:11.142 21:13:25 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.142 21:13:25 -- accel/accel.sh@19 -- # IFS=: 00:08:11.142 21:13:25 -- accel/accel.sh@19 -- # read -r var val 00:08:11.142 21:13:25 -- accel/accel.sh@20 -- # val= 00:08:11.142 21:13:25 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.142 21:13:25 -- accel/accel.sh@19 -- # IFS=: 00:08:11.142 21:13:25 -- accel/accel.sh@19 -- # read -r var val 00:08:14.447 21:13:28 -- accel/accel.sh@20 -- # val= 00:08:14.447 21:13:28 -- accel/accel.sh@21 -- # case "$var" in 00:08:14.447 21:13:28 -- accel/accel.sh@19 -- # IFS=: 00:08:14.447 21:13:28 -- accel/accel.sh@19 -- # read -r var val 00:08:14.447 21:13:28 -- accel/accel.sh@20 -- # val= 00:08:14.447 21:13:28 -- accel/accel.sh@21 -- # case "$var" in 00:08:14.447 21:13:28 -- accel/accel.sh@19 -- # IFS=: 00:08:14.447 21:13:28 -- accel/accel.sh@19 -- # read -r var val 00:08:14.447 21:13:28 -- accel/accel.sh@20 -- # val= 00:08:14.447 21:13:28 -- accel/accel.sh@21 -- # case "$var" in 00:08:14.447 21:13:28 -- accel/accel.sh@19 -- # IFS=: 00:08:14.447 21:13:28 -- 
accel/accel.sh@19 -- # read -r var val 00:08:14.447 21:13:28 -- accel/accel.sh@20 -- # val= 00:08:14.447 21:13:28 -- accel/accel.sh@21 -- # case "$var" in 00:08:14.447 21:13:28 -- accel/accel.sh@19 -- # IFS=: 00:08:14.447 21:13:28 -- accel/accel.sh@19 -- # read -r var val 00:08:14.447 21:13:28 -- accel/accel.sh@20 -- # val= 00:08:14.447 21:13:28 -- accel/accel.sh@21 -- # case "$var" in 00:08:14.447 21:13:28 -- accel/accel.sh@19 -- # IFS=: 00:08:14.448 21:13:28 -- accel/accel.sh@19 -- # read -r var val 00:08:14.448 21:13:28 -- accel/accel.sh@20 -- # val= 00:08:14.448 21:13:28 -- accel/accel.sh@21 -- # case "$var" in 00:08:14.448 21:13:28 -- accel/accel.sh@19 -- # IFS=: 00:08:14.448 21:13:28 -- accel/accel.sh@19 -- # read -r var val 00:08:14.448 21:13:28 -- accel/accel.sh@27 -- # [[ -n dsa ]] 00:08:14.448 21:13:28 -- accel/accel.sh@27 -- # [[ -n fill ]] 00:08:14.448 21:13:28 -- accel/accel.sh@27 -- # [[ dsa == \d\s\a ]] 00:08:14.448 00:08:14.448 real 0m9.678s 00:08:14.448 user 0m3.258s 00:08:14.448 sys 0m0.257s 00:08:14.448 21:13:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:14.448 21:13:28 -- common/autotest_common.sh@10 -- # set +x 00:08:14.448 ************************************ 00:08:14.448 END TEST accel_fill 00:08:14.448 ************************************ 00:08:14.448 21:13:28 -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:08:14.448 21:13:28 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:08:14.448 21:13:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:14.448 21:13:28 -- common/autotest_common.sh@10 -- # set +x 00:08:14.448 ************************************ 00:08:14.448 START TEST accel_copy_crc32c 00:08:14.448 ************************************ 00:08:14.448 21:13:28 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy_crc32c -y 00:08:14.448 21:13:28 -- accel/accel.sh@16 -- # local accel_opc 00:08:14.448 21:13:28 -- accel/accel.sh@17 -- # local accel_module 00:08:14.448 21:13:28 -- accel/accel.sh@19 -- # IFS=: 00:08:14.448 21:13:28 -- accel/accel.sh@19 -- # read -r var val 00:08:14.448 21:13:28 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:08:14.448 21:13:28 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:08:14.448 21:13:28 -- accel/accel.sh@12 -- # build_accel_config 00:08:14.448 21:13:28 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:14.448 21:13:28 -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:08:14.448 21:13:28 -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:08:14.448 21:13:28 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:08:14.448 21:13:28 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:08:14.448 21:13:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:14.448 21:13:28 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:14.448 21:13:28 -- accel/accel.sh@40 -- # local IFS=, 00:08:14.448 21:13:28 -- accel/accel.sh@41 -- # jq -r . 00:08:14.448 [2024-04-24 21:13:28.869990] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization... 
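Each TEST block in this run drives the same example binary, /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf, through the accel_test wrapper, with the JSON accel config fed in on /dev/fd/62. A minimal sketch of reproducing the fill run above by hand, with the flag meanings inferred from the trace rather than taken from the tool's help text (-t run time in seconds, -w workload, -f fill pattern, -q queue depth, -a alignment, -y verify):

    # Sketch only: assumes an SPDK build in this workspace and DSA devices
    # already set up; without the -c JSON config that accel.sh supplies on
    # fd 62, accel_perf would likely fall back to the software engine.
    cd /var/jenkins/workspace/dsa-phy-autotest/spdk
    sudo ./build/examples/accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y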
00:08:14.448 [2024-04-24 21:13:28.870094] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1045698 ] 00:08:14.448 EAL: No free 2048 kB hugepages reported on node 1 00:08:14.448 [2024-04-24 21:13:28.984730] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.448 [2024-04-24 21:13:29.080320] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.448 [2024-04-24 21:13:29.084825] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:08:14.448 [2024-04-24 21:13:29.092794] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:08:21.029 21:13:35 -- accel/accel.sh@20 -- # val= 00:08:21.029 21:13:35 -- accel/accel.sh@21 -- # case "$var" in 00:08:21.029 21:13:35 -- accel/accel.sh@19 -- # IFS=: 00:08:21.029 21:13:35 -- accel/accel.sh@19 -- # read -r var val 00:08:21.029 21:13:35 -- accel/accel.sh@20 -- # val= 00:08:21.029 21:13:35 -- accel/accel.sh@21 -- # case "$var" in 00:08:21.029 21:13:35 -- accel/accel.sh@19 -- # IFS=: 00:08:21.029 21:13:35 -- accel/accel.sh@19 -- # read -r var val 00:08:21.029 21:13:35 -- accel/accel.sh@20 -- # val=0x1 00:08:21.029 21:13:35 -- accel/accel.sh@21 -- # case "$var" in 00:08:21.029 21:13:35 -- accel/accel.sh@19 -- # IFS=: 00:08:21.029 21:13:35 -- accel/accel.sh@19 -- # read -r var val 00:08:21.029 21:13:35 -- accel/accel.sh@20 -- # val= 00:08:21.029 21:13:35 -- accel/accel.sh@21 -- # case "$var" in 00:08:21.029 21:13:35 -- accel/accel.sh@19 -- # IFS=: 00:08:21.029 21:13:35 -- accel/accel.sh@19 -- # read -r var val 00:08:21.029 21:13:35 -- accel/accel.sh@20 -- # val= 00:08:21.029 21:13:35 -- accel/accel.sh@21 -- # case "$var" in 00:08:21.029 21:13:35 -- accel/accel.sh@19 -- # IFS=: 00:08:21.029 21:13:35 -- accel/accel.sh@19 -- # read -r var val 00:08:21.029 21:13:35 -- accel/accel.sh@20 -- # val=copy_crc32c 00:08:21.029 21:13:35 -- accel/accel.sh@21 -- # case "$var" in 00:08:21.029 21:13:35 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:08:21.029 21:13:35 -- accel/accel.sh@19 -- # IFS=: 00:08:21.029 21:13:35 -- accel/accel.sh@19 -- # read -r var val 00:08:21.029 21:13:35 -- accel/accel.sh@20 -- # val=0 00:08:21.029 21:13:35 -- accel/accel.sh@21 -- # case "$var" in 00:08:21.029 21:13:35 -- accel/accel.sh@19 -- # IFS=: 00:08:21.029 21:13:35 -- accel/accel.sh@19 -- # read -r var val 00:08:21.029 21:13:35 -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:21.029 21:13:35 -- accel/accel.sh@21 -- # case "$var" in 00:08:21.029 21:13:35 -- accel/accel.sh@19 -- # IFS=: 00:08:21.029 21:13:35 -- accel/accel.sh@19 -- # read -r var val 00:08:21.029 21:13:35 -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:21.029 21:13:35 -- accel/accel.sh@21 -- # case "$var" in 00:08:21.029 21:13:35 -- accel/accel.sh@19 -- # IFS=: 00:08:21.029 21:13:35 -- accel/accel.sh@19 -- # read -r var val 00:08:21.029 21:13:35 -- accel/accel.sh@20 -- # val= 00:08:21.029 21:13:35 -- accel/accel.sh@21 -- # case "$var" in 00:08:21.029 21:13:35 -- accel/accel.sh@19 -- # IFS=: 00:08:21.029 21:13:35 -- accel/accel.sh@19 -- # read -r var val 00:08:21.029 21:13:35 -- accel/accel.sh@20 -- # val=dsa 00:08:21.029 21:13:35 -- accel/accel.sh@21 -- # case "$var" in 00:08:21.029 21:13:35 -- accel/accel.sh@22 -- # accel_module=dsa 00:08:21.029 21:13:35 -- accel/accel.sh@19 -- # IFS=: 00:08:21.029 21:13:35 -- accel/accel.sh@19 -- # read -r var val 
00:08:21.029 21:13:35 -- accel/accel.sh@20 -- # val=32 00:08:21.029 21:13:35 -- accel/accel.sh@21 -- # case "$var" in 00:08:21.029 21:13:35 -- accel/accel.sh@19 -- # IFS=: 00:08:21.029 21:13:35 -- accel/accel.sh@19 -- # read -r var val 00:08:21.029 21:13:35 -- accel/accel.sh@20 -- # val=32 00:08:21.029 21:13:35 -- accel/accel.sh@21 -- # case "$var" in 00:08:21.029 21:13:35 -- accel/accel.sh@19 -- # IFS=: 00:08:21.029 21:13:35 -- accel/accel.sh@19 -- # read -r var val 00:08:21.029 21:13:35 -- accel/accel.sh@20 -- # val=1 00:08:21.029 21:13:35 -- accel/accel.sh@21 -- # case "$var" in 00:08:21.029 21:13:35 -- accel/accel.sh@19 -- # IFS=: 00:08:21.029 21:13:35 -- accel/accel.sh@19 -- # read -r var val 00:08:21.029 21:13:35 -- accel/accel.sh@20 -- # val='1 seconds' 00:08:21.029 21:13:35 -- accel/accel.sh@21 -- # case "$var" in 00:08:21.029 21:13:35 -- accel/accel.sh@19 -- # IFS=: 00:08:21.029 21:13:35 -- accel/accel.sh@19 -- # read -r var val 00:08:21.029 21:13:35 -- accel/accel.sh@20 -- # val=Yes 00:08:21.029 21:13:35 -- accel/accel.sh@21 -- # case "$var" in 00:08:21.029 21:13:35 -- accel/accel.sh@19 -- # IFS=: 00:08:21.030 21:13:35 -- accel/accel.sh@19 -- # read -r var val 00:08:21.030 21:13:35 -- accel/accel.sh@20 -- # val= 00:08:21.030 21:13:35 -- accel/accel.sh@21 -- # case "$var" in 00:08:21.030 21:13:35 -- accel/accel.sh@19 -- # IFS=: 00:08:21.030 21:13:35 -- accel/accel.sh@19 -- # read -r var val 00:08:21.030 21:13:35 -- accel/accel.sh@20 -- # val= 00:08:21.030 21:13:35 -- accel/accel.sh@21 -- # case "$var" in 00:08:21.030 21:13:35 -- accel/accel.sh@19 -- # IFS=: 00:08:21.030 21:13:35 -- accel/accel.sh@19 -- # read -r var val 00:08:23.573 21:13:38 -- accel/accel.sh@20 -- # val= 00:08:23.573 21:13:38 -- accel/accel.sh@21 -- # case "$var" in 00:08:23.573 21:13:38 -- accel/accel.sh@19 -- # IFS=: 00:08:23.573 21:13:38 -- accel/accel.sh@19 -- # read -r var val 00:08:23.573 21:13:38 -- accel/accel.sh@20 -- # val= 00:08:23.573 21:13:38 -- accel/accel.sh@21 -- # case "$var" in 00:08:23.573 21:13:38 -- accel/accel.sh@19 -- # IFS=: 00:08:23.573 21:13:38 -- accel/accel.sh@19 -- # read -r var val 00:08:23.573 21:13:38 -- accel/accel.sh@20 -- # val= 00:08:23.573 21:13:38 -- accel/accel.sh@21 -- # case "$var" in 00:08:23.573 21:13:38 -- accel/accel.sh@19 -- # IFS=: 00:08:23.573 21:13:38 -- accel/accel.sh@19 -- # read -r var val 00:08:23.573 21:13:38 -- accel/accel.sh@20 -- # val= 00:08:23.573 21:13:38 -- accel/accel.sh@21 -- # case "$var" in 00:08:23.573 21:13:38 -- accel/accel.sh@19 -- # IFS=: 00:08:23.573 21:13:38 -- accel/accel.sh@19 -- # read -r var val 00:08:23.573 21:13:38 -- accel/accel.sh@20 -- # val= 00:08:23.573 21:13:38 -- accel/accel.sh@21 -- # case "$var" in 00:08:23.573 21:13:38 -- accel/accel.sh@19 -- # IFS=: 00:08:23.573 21:13:38 -- accel/accel.sh@19 -- # read -r var val 00:08:23.573 21:13:38 -- accel/accel.sh@20 -- # val= 00:08:23.573 21:13:38 -- accel/accel.sh@21 -- # case "$var" in 00:08:23.573 21:13:38 -- accel/accel.sh@19 -- # IFS=: 00:08:23.573 21:13:38 -- accel/accel.sh@19 -- # read -r var val 00:08:23.573 21:13:38 -- accel/accel.sh@27 -- # [[ -n dsa ]] 00:08:23.573 21:13:38 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:08:23.573 21:13:38 -- accel/accel.sh@27 -- # [[ dsa == \d\s\a ]] 00:08:23.573 00:08:23.573 real 0m9.654s 00:08:23.573 user 0m3.265s 00:08:23.573 sys 0m0.223s 00:08:23.573 21:13:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:23.573 21:13:38 -- common/autotest_common.sh@10 -- # set +x 00:08:23.573 ************************************ 
00:08:23.573 END TEST accel_copy_crc32c 00:08:23.573 ************************************ 00:08:23.573 21:13:38 -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:08:23.573 21:13:38 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:08:23.573 21:13:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:23.573 21:13:38 -- common/autotest_common.sh@10 -- # set +x 00:08:23.834 ************************************ 00:08:23.834 START TEST accel_copy_crc32c_C2 00:08:23.834 ************************************ 00:08:23.834 21:13:38 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:08:23.834 21:13:38 -- accel/accel.sh@16 -- # local accel_opc 00:08:23.834 21:13:38 -- accel/accel.sh@17 -- # local accel_module 00:08:23.834 21:13:38 -- accel/accel.sh@19 -- # IFS=: 00:08:23.834 21:13:38 -- accel/accel.sh@19 -- # read -r var val 00:08:23.834 21:13:38 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:08:23.834 21:13:38 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:08:23.834 21:13:38 -- accel/accel.sh@12 -- # build_accel_config 00:08:23.834 21:13:38 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:23.834 21:13:38 -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:08:23.834 21:13:38 -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:08:23.834 21:13:38 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:08:23.834 21:13:38 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:08:23.834 21:13:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:23.834 21:13:38 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:23.834 21:13:38 -- accel/accel.sh@40 -- # local IFS=, 00:08:23.834 21:13:38 -- accel/accel.sh@41 -- # jq -r . 00:08:23.834 [2024-04-24 21:13:38.631109] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization... 
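The accel_copy_crc32c_C2 variant starting here re-runs the copy+CRC32C workload with -C 2. Judging from the trace that follows, the chained form keeps the 4096-byte block size but spreads the CRC across two buffers (the '8192 bytes' value); that reading is inferred from the values in this log, not from accel_perf documentation. A sketch of the standalone invocation:

    # Sketch: chained copy_crc32c with two source buffers (-C 2), as driven
    # by run_test accel_copy_crc32c_C2 above; CRC seed 0 per the val=0 line.
    sudo ./build/examples/accel_perf -t 1 -w copy_crc32c -y -C 2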
00:08:23.834 [2024-04-24 21:13:38.631212] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1047533 ] 00:08:23.834 EAL: No free 2048 kB hugepages reported on node 1 00:08:23.834 [2024-04-24 21:13:38.745937] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.095 [2024-04-24 21:13:38.838077] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.095 [2024-04-24 21:13:38.842564] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:08:24.095 [2024-04-24 21:13:38.850530] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:08:30.745 21:13:45 -- accel/accel.sh@20 -- # val= 00:08:30.745 21:13:45 -- accel/accel.sh@21 -- # case "$var" in 00:08:30.745 21:13:45 -- accel/accel.sh@19 -- # IFS=: 00:08:30.745 21:13:45 -- accel/accel.sh@19 -- # read -r var val 00:08:30.745 21:13:45 -- accel/accel.sh@20 -- # val= 00:08:30.745 21:13:45 -- accel/accel.sh@21 -- # case "$var" in 00:08:30.745 21:13:45 -- accel/accel.sh@19 -- # IFS=: 00:08:30.745 21:13:45 -- accel/accel.sh@19 -- # read -r var val 00:08:30.745 21:13:45 -- accel/accel.sh@20 -- # val=0x1 00:08:30.745 21:13:45 -- accel/accel.sh@21 -- # case "$var" in 00:08:30.745 21:13:45 -- accel/accel.sh@19 -- # IFS=: 00:08:30.745 21:13:45 -- accel/accel.sh@19 -- # read -r var val 00:08:30.745 21:13:45 -- accel/accel.sh@20 -- # val= 00:08:30.745 21:13:45 -- accel/accel.sh@21 -- # case "$var" in 00:08:30.745 21:13:45 -- accel/accel.sh@19 -- # IFS=: 00:08:30.745 21:13:45 -- accel/accel.sh@19 -- # read -r var val 00:08:30.745 21:13:45 -- accel/accel.sh@20 -- # val= 00:08:30.745 21:13:45 -- accel/accel.sh@21 -- # case "$var" in 00:08:30.745 21:13:45 -- accel/accel.sh@19 -- # IFS=: 00:08:30.745 21:13:45 -- accel/accel.sh@19 -- # read -r var val 00:08:30.745 21:13:45 -- accel/accel.sh@20 -- # val=copy_crc32c 00:08:30.745 21:13:45 -- accel/accel.sh@21 -- # case "$var" in 00:08:30.745 21:13:45 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:08:30.745 21:13:45 -- accel/accel.sh@19 -- # IFS=: 00:08:30.745 21:13:45 -- accel/accel.sh@19 -- # read -r var val 00:08:30.745 21:13:45 -- accel/accel.sh@20 -- # val=0 00:08:30.745 21:13:45 -- accel/accel.sh@21 -- # case "$var" in 00:08:30.745 21:13:45 -- accel/accel.sh@19 -- # IFS=: 00:08:30.745 21:13:45 -- accel/accel.sh@19 -- # read -r var val 00:08:30.745 21:13:45 -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:30.745 21:13:45 -- accel/accel.sh@21 -- # case "$var" in 00:08:30.745 21:13:45 -- accel/accel.sh@19 -- # IFS=: 00:08:30.745 21:13:45 -- accel/accel.sh@19 -- # read -r var val 00:08:30.745 21:13:45 -- accel/accel.sh@20 -- # val='8192 bytes' 00:08:30.745 21:13:45 -- accel/accel.sh@21 -- # case "$var" in 00:08:30.745 21:13:45 -- accel/accel.sh@19 -- # IFS=: 00:08:30.745 21:13:45 -- accel/accel.sh@19 -- # read -r var val 00:08:30.745 21:13:45 -- accel/accel.sh@20 -- # val= 00:08:30.745 21:13:45 -- accel/accel.sh@21 -- # case "$var" in 00:08:30.745 21:13:45 -- accel/accel.sh@19 -- # IFS=: 00:08:30.745 21:13:45 -- accel/accel.sh@19 -- # read -r var val 00:08:30.745 21:13:45 -- accel/accel.sh@20 -- # val=dsa 00:08:30.745 21:13:45 -- accel/accel.sh@21 -- # case "$var" in 00:08:30.745 21:13:45 -- accel/accel.sh@22 -- # accel_module=dsa 00:08:30.745 21:13:45 -- accel/accel.sh@19 -- # IFS=: 00:08:30.745 21:13:45 -- accel/accel.sh@19 -- # read -r var val 
00:08:30.745 21:13:45 -- accel/accel.sh@20 -- # val=32 00:08:30.745 21:13:45 -- accel/accel.sh@21 -- # case "$var" in 00:08:30.745 21:13:45 -- accel/accel.sh@19 -- # IFS=: 00:08:30.745 21:13:45 -- accel/accel.sh@19 -- # read -r var val 00:08:30.745 21:13:45 -- accel/accel.sh@20 -- # val=32 00:08:30.745 21:13:45 -- accel/accel.sh@21 -- # case "$var" in 00:08:30.745 21:13:45 -- accel/accel.sh@19 -- # IFS=: 00:08:30.745 21:13:45 -- accel/accel.sh@19 -- # read -r var val 00:08:30.745 21:13:45 -- accel/accel.sh@20 -- # val=1 00:08:30.745 21:13:45 -- accel/accel.sh@21 -- # case "$var" in 00:08:30.745 21:13:45 -- accel/accel.sh@19 -- # IFS=: 00:08:30.745 21:13:45 -- accel/accel.sh@19 -- # read -r var val 00:08:30.745 21:13:45 -- accel/accel.sh@20 -- # val='1 seconds' 00:08:30.745 21:13:45 -- accel/accel.sh@21 -- # case "$var" in 00:08:30.745 21:13:45 -- accel/accel.sh@19 -- # IFS=: 00:08:30.745 21:13:45 -- accel/accel.sh@19 -- # read -r var val 00:08:30.745 21:13:45 -- accel/accel.sh@20 -- # val=Yes 00:08:30.745 21:13:45 -- accel/accel.sh@21 -- # case "$var" in 00:08:30.745 21:13:45 -- accel/accel.sh@19 -- # IFS=: 00:08:30.745 21:13:45 -- accel/accel.sh@19 -- # read -r var val 00:08:30.745 21:13:45 -- accel/accel.sh@20 -- # val= 00:08:30.745 21:13:45 -- accel/accel.sh@21 -- # case "$var" in 00:08:30.745 21:13:45 -- accel/accel.sh@19 -- # IFS=: 00:08:30.745 21:13:45 -- accel/accel.sh@19 -- # read -r var val 00:08:30.745 21:13:45 -- accel/accel.sh@20 -- # val= 00:08:30.745 21:13:45 -- accel/accel.sh@21 -- # case "$var" in 00:08:30.745 21:13:45 -- accel/accel.sh@19 -- # IFS=: 00:08:30.745 21:13:45 -- accel/accel.sh@19 -- # read -r var val 00:08:33.290 21:13:48 -- accel/accel.sh@20 -- # val= 00:08:33.290 21:13:48 -- accel/accel.sh@21 -- # case "$var" in 00:08:33.290 21:13:48 -- accel/accel.sh@19 -- # IFS=: 00:08:33.290 21:13:48 -- accel/accel.sh@19 -- # read -r var val 00:08:33.290 21:13:48 -- accel/accel.sh@20 -- # val= 00:08:33.290 21:13:48 -- accel/accel.sh@21 -- # case "$var" in 00:08:33.290 21:13:48 -- accel/accel.sh@19 -- # IFS=: 00:08:33.291 21:13:48 -- accel/accel.sh@19 -- # read -r var val 00:08:33.291 21:13:48 -- accel/accel.sh@20 -- # val= 00:08:33.291 21:13:48 -- accel/accel.sh@21 -- # case "$var" in 00:08:33.291 21:13:48 -- accel/accel.sh@19 -- # IFS=: 00:08:33.291 21:13:48 -- accel/accel.sh@19 -- # read -r var val 00:08:33.291 21:13:48 -- accel/accel.sh@20 -- # val= 00:08:33.291 21:13:48 -- accel/accel.sh@21 -- # case "$var" in 00:08:33.291 21:13:48 -- accel/accel.sh@19 -- # IFS=: 00:08:33.291 21:13:48 -- accel/accel.sh@19 -- # read -r var val 00:08:33.291 21:13:48 -- accel/accel.sh@20 -- # val= 00:08:33.291 21:13:48 -- accel/accel.sh@21 -- # case "$var" in 00:08:33.291 21:13:48 -- accel/accel.sh@19 -- # IFS=: 00:08:33.291 21:13:48 -- accel/accel.sh@19 -- # read -r var val 00:08:33.291 21:13:48 -- accel/accel.sh@20 -- # val= 00:08:33.291 21:13:48 -- accel/accel.sh@21 -- # case "$var" in 00:08:33.291 21:13:48 -- accel/accel.sh@19 -- # IFS=: 00:08:33.291 21:13:48 -- accel/accel.sh@19 -- # read -r var val 00:08:33.291 21:13:48 -- accel/accel.sh@27 -- # [[ -n dsa ]] 00:08:33.291 21:13:48 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:08:33.291 21:13:48 -- accel/accel.sh@27 -- # [[ dsa == \d\s\a ]] 00:08:33.291 00:08:33.291 real 0m9.652s 00:08:33.291 user 0m3.269s 00:08:33.291 sys 0m0.212s 00:08:33.291 21:13:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:33.291 21:13:48 -- common/autotest_common.sh@10 -- # set +x 00:08:33.291 ************************************ 
00:08:33.291 END TEST accel_copy_crc32c_C2 00:08:33.291 ************************************ 00:08:33.552 21:13:48 -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:08:33.552 21:13:48 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:08:33.552 21:13:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:33.552 21:13:48 -- common/autotest_common.sh@10 -- # set +x 00:08:33.552 ************************************ 00:08:33.552 START TEST accel_dualcast 00:08:33.552 ************************************ 00:08:33.552 21:13:48 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dualcast -y 00:08:33.552 21:13:48 -- accel/accel.sh@16 -- # local accel_opc 00:08:33.552 21:13:48 -- accel/accel.sh@17 -- # local accel_module 00:08:33.552 21:13:48 -- accel/accel.sh@19 -- # IFS=: 00:08:33.552 21:13:48 -- accel/accel.sh@19 -- # read -r var val 00:08:33.552 21:13:48 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:08:33.552 21:13:48 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:08:33.552 21:13:48 -- accel/accel.sh@12 -- # build_accel_config 00:08:33.552 21:13:48 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:33.552 21:13:48 -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:08:33.552 21:13:48 -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:08:33.552 21:13:48 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:08:33.552 21:13:48 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:08:33.552 21:13:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:33.552 21:13:48 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:33.552 21:13:48 -- accel/accel.sh@40 -- # local IFS=, 00:08:33.552 21:13:48 -- accel/accel.sh@41 -- # jq -r . 00:08:33.552 [2024-04-24 21:13:48.394792] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization... 
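dualcast, starting here, writes one 4096-byte source to two destination buffers and verifies both, again on the dsa module. A sketch of the equivalent standalone run, with the same caveat as above about the missing -c config:

    # Sketch: dualcast = one source copied to two destinations, verified (-y).
    sudo ./build/examples/accel_perf -t 1 -w dualcast -y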
00:08:33.552 [2024-04-24 21:13:48.394897] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1049357 ] 00:08:33.552 EAL: No free 2048 kB hugepages reported on node 1 00:08:33.552 [2024-04-24 21:13:48.511512] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.813 [2024-04-24 21:13:48.612421] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.813 [2024-04-24 21:13:48.616923] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:08:33.813 [2024-04-24 21:13:48.624887] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:08:40.402 21:13:55 -- accel/accel.sh@20 -- # val= 00:08:40.403 21:13:55 -- accel/accel.sh@21 -- # case "$var" in 00:08:40.403 21:13:55 -- accel/accel.sh@19 -- # IFS=: 00:08:40.403 21:13:55 -- accel/accel.sh@19 -- # read -r var val 00:08:40.403 21:13:55 -- accel/accel.sh@20 -- # val= 00:08:40.403 21:13:55 -- accel/accel.sh@21 -- # case "$var" in 00:08:40.403 21:13:55 -- accel/accel.sh@19 -- # IFS=: 00:08:40.403 21:13:55 -- accel/accel.sh@19 -- # read -r var val 00:08:40.403 21:13:55 -- accel/accel.sh@20 -- # val=0x1 00:08:40.403 21:13:55 -- accel/accel.sh@21 -- # case "$var" in 00:08:40.403 21:13:55 -- accel/accel.sh@19 -- # IFS=: 00:08:40.403 21:13:55 -- accel/accel.sh@19 -- # read -r var val 00:08:40.403 21:13:55 -- accel/accel.sh@20 -- # val= 00:08:40.403 21:13:55 -- accel/accel.sh@21 -- # case "$var" in 00:08:40.403 21:13:55 -- accel/accel.sh@19 -- # IFS=: 00:08:40.403 21:13:55 -- accel/accel.sh@19 -- # read -r var val 00:08:40.403 21:13:55 -- accel/accel.sh@20 -- # val= 00:08:40.403 21:13:55 -- accel/accel.sh@21 -- # case "$var" in 00:08:40.403 21:13:55 -- accel/accel.sh@19 -- # IFS=: 00:08:40.403 21:13:55 -- accel/accel.sh@19 -- # read -r var val 00:08:40.403 21:13:55 -- accel/accel.sh@20 -- # val=dualcast 00:08:40.403 21:13:55 -- accel/accel.sh@21 -- # case "$var" in 00:08:40.403 21:13:55 -- accel/accel.sh@23 -- # accel_opc=dualcast 00:08:40.403 21:13:55 -- accel/accel.sh@19 -- # IFS=: 00:08:40.403 21:13:55 -- accel/accel.sh@19 -- # read -r var val 00:08:40.403 21:13:55 -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:40.403 21:13:55 -- accel/accel.sh@21 -- # case "$var" in 00:08:40.403 21:13:55 -- accel/accel.sh@19 -- # IFS=: 00:08:40.403 21:13:55 -- accel/accel.sh@19 -- # read -r var val 00:08:40.403 21:13:55 -- accel/accel.sh@20 -- # val= 00:08:40.403 21:13:55 -- accel/accel.sh@21 -- # case "$var" in 00:08:40.403 21:13:55 -- accel/accel.sh@19 -- # IFS=: 00:08:40.403 21:13:55 -- accel/accel.sh@19 -- # read -r var val 00:08:40.403 21:13:55 -- accel/accel.sh@20 -- # val=dsa 00:08:40.403 21:13:55 -- accel/accel.sh@21 -- # case "$var" in 00:08:40.403 21:13:55 -- accel/accel.sh@22 -- # accel_module=dsa 00:08:40.403 21:13:55 -- accel/accel.sh@19 -- # IFS=: 00:08:40.403 21:13:55 -- accel/accel.sh@19 -- # read -r var val 00:08:40.403 21:13:55 -- accel/accel.sh@20 -- # val=32 00:08:40.403 21:13:55 -- accel/accel.sh@21 -- # case "$var" in 00:08:40.403 21:13:55 -- accel/accel.sh@19 -- # IFS=: 00:08:40.403 21:13:55 -- accel/accel.sh@19 -- # read -r var val 00:08:40.403 21:13:55 -- accel/accel.sh@20 -- # val=32 00:08:40.403 21:13:55 -- accel/accel.sh@21 -- # case "$var" in 00:08:40.403 21:13:55 -- accel/accel.sh@19 -- # IFS=: 00:08:40.403 21:13:55 -- accel/accel.sh@19 -- # read -r var val 00:08:40.403 21:13:55 -- 
accel/accel.sh@20 -- # val=1 00:08:40.403 21:13:55 -- accel/accel.sh@21 -- # case "$var" in 00:08:40.403 21:13:55 -- accel/accel.sh@19 -- # IFS=: 00:08:40.403 21:13:55 -- accel/accel.sh@19 -- # read -r var val 00:08:40.403 21:13:55 -- accel/accel.sh@20 -- # val='1 seconds' 00:08:40.403 21:13:55 -- accel/accel.sh@21 -- # case "$var" in 00:08:40.403 21:13:55 -- accel/accel.sh@19 -- # IFS=: 00:08:40.403 21:13:55 -- accel/accel.sh@19 -- # read -r var val 00:08:40.403 21:13:55 -- accel/accel.sh@20 -- # val=Yes 00:08:40.403 21:13:55 -- accel/accel.sh@21 -- # case "$var" in 00:08:40.403 21:13:55 -- accel/accel.sh@19 -- # IFS=: 00:08:40.403 21:13:55 -- accel/accel.sh@19 -- # read -r var val 00:08:40.403 21:13:55 -- accel/accel.sh@20 -- # val= 00:08:40.403 21:13:55 -- accel/accel.sh@21 -- # case "$var" in 00:08:40.403 21:13:55 -- accel/accel.sh@19 -- # IFS=: 00:08:40.403 21:13:55 -- accel/accel.sh@19 -- # read -r var val 00:08:40.403 21:13:55 -- accel/accel.sh@20 -- # val= 00:08:40.403 21:13:55 -- accel/accel.sh@21 -- # case "$var" in 00:08:40.403 21:13:55 -- accel/accel.sh@19 -- # IFS=: 00:08:40.403 21:13:55 -- accel/accel.sh@19 -- # read -r var val 00:08:43.702 21:13:58 -- accel/accel.sh@20 -- # val= 00:08:43.702 21:13:58 -- accel/accel.sh@21 -- # case "$var" in 00:08:43.702 21:13:58 -- accel/accel.sh@19 -- # IFS=: 00:08:43.702 21:13:58 -- accel/accel.sh@19 -- # read -r var val 00:08:43.702 21:13:58 -- accel/accel.sh@20 -- # val= 00:08:43.702 21:13:58 -- accel/accel.sh@21 -- # case "$var" in 00:08:43.702 21:13:58 -- accel/accel.sh@19 -- # IFS=: 00:08:43.702 21:13:58 -- accel/accel.sh@19 -- # read -r var val 00:08:43.702 21:13:58 -- accel/accel.sh@20 -- # val= 00:08:43.702 21:13:58 -- accel/accel.sh@21 -- # case "$var" in 00:08:43.702 21:13:58 -- accel/accel.sh@19 -- # IFS=: 00:08:43.702 21:13:58 -- accel/accel.sh@19 -- # read -r var val 00:08:43.702 21:13:58 -- accel/accel.sh@20 -- # val= 00:08:43.702 21:13:58 -- accel/accel.sh@21 -- # case "$var" in 00:08:43.702 21:13:58 -- accel/accel.sh@19 -- # IFS=: 00:08:43.702 21:13:58 -- accel/accel.sh@19 -- # read -r var val 00:08:43.702 21:13:58 -- accel/accel.sh@20 -- # val= 00:08:43.702 21:13:58 -- accel/accel.sh@21 -- # case "$var" in 00:08:43.702 21:13:58 -- accel/accel.sh@19 -- # IFS=: 00:08:43.702 21:13:58 -- accel/accel.sh@19 -- # read -r var val 00:08:43.702 21:13:58 -- accel/accel.sh@20 -- # val= 00:08:43.702 21:13:58 -- accel/accel.sh@21 -- # case "$var" in 00:08:43.702 21:13:58 -- accel/accel.sh@19 -- # IFS=: 00:08:43.702 21:13:58 -- accel/accel.sh@19 -- # read -r var val 00:08:43.702 21:13:58 -- accel/accel.sh@27 -- # [[ -n dsa ]] 00:08:43.702 21:13:58 -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:08:43.702 21:13:58 -- accel/accel.sh@27 -- # [[ dsa == \d\s\a ]] 00:08:43.702 00:08:43.702 real 0m9.686s 00:08:43.702 user 0m3.283s 00:08:43.702 sys 0m0.234s 00:08:43.702 21:13:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:43.702 21:13:58 -- common/autotest_common.sh@10 -- # set +x 00:08:43.702 ************************************ 00:08:43.702 END TEST accel_dualcast 00:08:43.702 ************************************ 00:08:43.702 21:13:58 -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:08:43.702 21:13:58 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:08:43.702 21:13:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:43.702 21:13:58 -- common/autotest_common.sh@10 -- # set +x 00:08:43.702 ************************************ 00:08:43.702 START TEST accel_compare 00:08:43.702 
************************************ 00:08:43.702 21:13:58 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w compare -y 00:08:43.702 21:13:58 -- accel/accel.sh@16 -- # local accel_opc 00:08:43.702 21:13:58 -- accel/accel.sh@17 -- # local accel_module 00:08:43.702 21:13:58 -- accel/accel.sh@19 -- # IFS=: 00:08:43.702 21:13:58 -- accel/accel.sh@19 -- # read -r var val 00:08:43.702 21:13:58 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:08:43.702 21:13:58 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:08:43.702 21:13:58 -- accel/accel.sh@12 -- # build_accel_config 00:08:43.703 21:13:58 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:43.703 21:13:58 -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:08:43.703 21:13:58 -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:08:43.703 21:13:58 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:08:43.703 21:13:58 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:08:43.703 21:13:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:43.703 21:13:58 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:43.703 21:13:58 -- accel/accel.sh@40 -- # local IFS=, 00:08:43.703 21:13:58 -- accel/accel.sh@41 -- # jq -r . 00:08:43.703 [2024-04-24 21:13:58.194051] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization... 00:08:43.703 [2024-04-24 21:13:58.194156] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1051414 ] 00:08:43.703 EAL: No free 2048 kB hugepages reported on node 1 00:08:43.703 [2024-04-24 21:13:58.309229] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:43.703 [2024-04-24 21:13:58.405775] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.703 [2024-04-24 21:13:58.410289] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:08:43.703 [2024-04-24 21:13:58.418250] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:08:50.285 21:14:04 -- accel/accel.sh@20 -- # val= 00:08:50.285 21:14:04 -- accel/accel.sh@21 -- # case "$var" in 00:08:50.285 21:14:04 -- accel/accel.sh@19 -- # IFS=: 00:08:50.285 21:14:04 -- accel/accel.sh@19 -- # read -r var val 00:08:50.285 21:14:04 -- accel/accel.sh@20 -- # val= 00:08:50.285 21:14:04 -- accel/accel.sh@21 -- # case "$var" in 00:08:50.285 21:14:04 -- accel/accel.sh@19 -- # IFS=: 00:08:50.285 21:14:04 -- accel/accel.sh@19 -- # read -r var val 00:08:50.285 21:14:04 -- accel/accel.sh@20 -- # val=0x1 00:08:50.285 21:14:04 -- accel/accel.sh@21 -- # case "$var" in 00:08:50.285 21:14:04 -- accel/accel.sh@19 -- # IFS=: 00:08:50.285 21:14:04 -- accel/accel.sh@19 -- # read -r var val 00:08:50.285 21:14:04 -- accel/accel.sh@20 -- # val= 00:08:50.285 21:14:04 -- accel/accel.sh@21 -- # case "$var" in 00:08:50.285 21:14:04 -- accel/accel.sh@19 -- # IFS=: 00:08:50.285 21:14:04 -- accel/accel.sh@19 -- # read -r var val 00:08:50.285 21:14:04 -- accel/accel.sh@20 -- # val= 00:08:50.285 21:14:04 -- accel/accel.sh@21 -- # case "$var" in 00:08:50.285 21:14:04 -- accel/accel.sh@19 -- # IFS=: 00:08:50.285 21:14:04 -- accel/accel.sh@19 -- # read -r var val 00:08:50.285 21:14:04 -- accel/accel.sh@20 -- # val=compare 00:08:50.285 21:14:04 -- accel/accel.sh@21 -- # case "$var" in 00:08:50.285 21:14:04 -- 
accel/accel.sh@23 -- # accel_opc=compare 00:08:50.285 21:14:04 -- accel/accel.sh@19 -- # IFS=: 00:08:50.285 21:14:04 -- accel/accel.sh@19 -- # read -r var val 00:08:50.285 21:14:04 -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:50.285 21:14:04 -- accel/accel.sh@21 -- # case "$var" in 00:08:50.285 21:14:04 -- accel/accel.sh@19 -- # IFS=: 00:08:50.285 21:14:04 -- accel/accel.sh@19 -- # read -r var val 00:08:50.285 21:14:04 -- accel/accel.sh@20 -- # val= 00:08:50.285 21:14:04 -- accel/accel.sh@21 -- # case "$var" in 00:08:50.285 21:14:04 -- accel/accel.sh@19 -- # IFS=: 00:08:50.285 21:14:04 -- accel/accel.sh@19 -- # read -r var val 00:08:50.285 21:14:04 -- accel/accel.sh@20 -- # val=dsa 00:08:50.285 21:14:04 -- accel/accel.sh@21 -- # case "$var" in 00:08:50.285 21:14:04 -- accel/accel.sh@22 -- # accel_module=dsa 00:08:50.285 21:14:04 -- accel/accel.sh@19 -- # IFS=: 00:08:50.285 21:14:04 -- accel/accel.sh@19 -- # read -r var val 00:08:50.285 21:14:04 -- accel/accel.sh@20 -- # val=32 00:08:50.285 21:14:04 -- accel/accel.sh@21 -- # case "$var" in 00:08:50.285 21:14:04 -- accel/accel.sh@19 -- # IFS=: 00:08:50.285 21:14:04 -- accel/accel.sh@19 -- # read -r var val 00:08:50.285 21:14:04 -- accel/accel.sh@20 -- # val=32 00:08:50.285 21:14:04 -- accel/accel.sh@21 -- # case "$var" in 00:08:50.285 21:14:04 -- accel/accel.sh@19 -- # IFS=: 00:08:50.285 21:14:04 -- accel/accel.sh@19 -- # read -r var val 00:08:50.285 21:14:04 -- accel/accel.sh@20 -- # val=1 00:08:50.285 21:14:04 -- accel/accel.sh@21 -- # case "$var" in 00:08:50.285 21:14:04 -- accel/accel.sh@19 -- # IFS=: 00:08:50.285 21:14:04 -- accel/accel.sh@19 -- # read -r var val 00:08:50.285 21:14:04 -- accel/accel.sh@20 -- # val='1 seconds' 00:08:50.285 21:14:04 -- accel/accel.sh@21 -- # case "$var" in 00:08:50.285 21:14:04 -- accel/accel.sh@19 -- # IFS=: 00:08:50.285 21:14:04 -- accel/accel.sh@19 -- # read -r var val 00:08:50.285 21:14:04 -- accel/accel.sh@20 -- # val=Yes 00:08:50.285 21:14:04 -- accel/accel.sh@21 -- # case "$var" in 00:08:50.285 21:14:04 -- accel/accel.sh@19 -- # IFS=: 00:08:50.285 21:14:04 -- accel/accel.sh@19 -- # read -r var val 00:08:50.285 21:14:04 -- accel/accel.sh@20 -- # val= 00:08:50.285 21:14:04 -- accel/accel.sh@21 -- # case "$var" in 00:08:50.285 21:14:04 -- accel/accel.sh@19 -- # IFS=: 00:08:50.285 21:14:04 -- accel/accel.sh@19 -- # read -r var val 00:08:50.285 21:14:04 -- accel/accel.sh@20 -- # val= 00:08:50.285 21:14:04 -- accel/accel.sh@21 -- # case "$var" in 00:08:50.285 21:14:04 -- accel/accel.sh@19 -- # IFS=: 00:08:50.285 21:14:04 -- accel/accel.sh@19 -- # read -r var val 00:08:52.858 21:14:07 -- accel/accel.sh@20 -- # val= 00:08:52.858 21:14:07 -- accel/accel.sh@21 -- # case "$var" in 00:08:52.858 21:14:07 -- accel/accel.sh@19 -- # IFS=: 00:08:52.858 21:14:07 -- accel/accel.sh@19 -- # read -r var val 00:08:52.858 21:14:07 -- accel/accel.sh@20 -- # val= 00:08:52.858 21:14:07 -- accel/accel.sh@21 -- # case "$var" in 00:08:52.858 21:14:07 -- accel/accel.sh@19 -- # IFS=: 00:08:52.858 21:14:07 -- accel/accel.sh@19 -- # read -r var val 00:08:52.858 21:14:07 -- accel/accel.sh@20 -- # val= 00:08:52.858 21:14:07 -- accel/accel.sh@21 -- # case "$var" in 00:08:52.858 21:14:07 -- accel/accel.sh@19 -- # IFS=: 00:08:52.858 21:14:07 -- accel/accel.sh@19 -- # read -r var val 00:08:52.858 21:14:07 -- accel/accel.sh@20 -- # val= 00:08:52.858 21:14:07 -- accel/accel.sh@21 -- # case "$var" in 00:08:52.858 21:14:07 -- accel/accel.sh@19 -- # IFS=: 00:08:52.858 21:14:07 -- accel/accel.sh@19 -- # read -r var val 00:08:52.858 
21:14:07 -- accel/accel.sh@20 -- # val= 00:08:52.858 21:14:07 -- accel/accel.sh@21 -- # case "$var" in 00:08:52.858 21:14:07 -- accel/accel.sh@19 -- # IFS=: 00:08:52.858 21:14:07 -- accel/accel.sh@19 -- # read -r var val 00:08:52.858 21:14:07 -- accel/accel.sh@20 -- # val= 00:08:52.858 21:14:07 -- accel/accel.sh@21 -- # case "$var" in 00:08:52.858 21:14:07 -- accel/accel.sh@19 -- # IFS=: 00:08:52.858 21:14:07 -- accel/accel.sh@19 -- # read -r var val 00:08:52.858 21:14:07 -- accel/accel.sh@27 -- # [[ -n dsa ]] 00:08:52.858 21:14:07 -- accel/accel.sh@27 -- # [[ -n compare ]] 00:08:52.858 21:14:07 -- accel/accel.sh@27 -- # [[ dsa == \d\s\a ]] 00:08:52.858 00:08:52.858 real 0m9.662s 00:08:52.858 user 0m3.265s 00:08:52.858 sys 0m0.227s 00:08:52.858 21:14:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:52.858 21:14:07 -- common/autotest_common.sh@10 -- # set +x 00:08:52.858 ************************************ 00:08:52.858 END TEST accel_compare 00:08:52.858 ************************************ 00:08:53.119 21:14:07 -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:08:53.119 21:14:07 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:08:53.119 21:14:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:53.119 21:14:07 -- common/autotest_common.sh@10 -- # set +x 00:08:53.119 ************************************ 00:08:53.119 START TEST accel_xor 00:08:53.119 ************************************ 00:08:53.119 21:14:07 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w xor -y 00:08:53.119 21:14:07 -- accel/accel.sh@16 -- # local accel_opc 00:08:53.119 21:14:07 -- accel/accel.sh@17 -- # local accel_module 00:08:53.119 21:14:07 -- accel/accel.sh@19 -- # IFS=: 00:08:53.119 21:14:07 -- accel/accel.sh@19 -- # read -r var val 00:08:53.119 21:14:07 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:08:53.119 21:14:07 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:08:53.119 21:14:07 -- accel/accel.sh@12 -- # build_accel_config 00:08:53.119 21:14:07 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:53.119 21:14:07 -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:08:53.119 21:14:07 -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:08:53.119 21:14:07 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:08:53.119 21:14:07 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:08:53.119 21:14:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:53.119 21:14:07 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:53.119 21:14:07 -- accel/accel.sh@40 -- # local IFS=, 00:08:53.119 21:14:07 -- accel/accel.sh@41 -- # jq -r . 00:08:53.119 [2024-04-24 21:14:07.968318] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization... 
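The xor test that starts here is the first in this run where the trace below records accel_module=software rather than dsa: the XOR opcode is evidently not offloaded to the DSA hardware, so the harness falls through to the software engine, and the final check only asserts [[ -n software ]]. With no -x flag given, the trace shows val=2, i.e. two source buffers. Sketch:

    # Sketch: xor across the default two source buffers; expect the software
    # engine rather than DSA for this opcode, per the trace in this log.
    sudo ./build/examples/accel_perf -t 1 -w xor -y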
00:08:53.119 [2024-04-24 21:14:07.968422] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1053272 ] 00:08:53.119 EAL: No free 2048 kB hugepages reported on node 1 00:08:53.119 [2024-04-24 21:14:08.083364] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:53.379 [2024-04-24 21:14:08.177835] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:53.379 [2024-04-24 21:14:08.182334] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:08:53.379 [2024-04-24 21:14:08.190308] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:08:59.966 21:14:14 -- accel/accel.sh@20 -- # val= 00:08:59.966 21:14:14 -- accel/accel.sh@21 -- # case "$var" in 00:08:59.966 21:14:14 -- accel/accel.sh@19 -- # IFS=: 00:08:59.966 21:14:14 -- accel/accel.sh@19 -- # read -r var val 00:08:59.966 21:14:14 -- accel/accel.sh@20 -- # val= 00:08:59.966 21:14:14 -- accel/accel.sh@21 -- # case "$var" in 00:08:59.966 21:14:14 -- accel/accel.sh@19 -- # IFS=: 00:08:59.966 21:14:14 -- accel/accel.sh@19 -- # read -r var val 00:08:59.966 21:14:14 -- accel/accel.sh@20 -- # val=0x1 00:08:59.966 21:14:14 -- accel/accel.sh@21 -- # case "$var" in 00:08:59.966 21:14:14 -- accel/accel.sh@19 -- # IFS=: 00:08:59.966 21:14:14 -- accel/accel.sh@19 -- # read -r var val 00:08:59.966 21:14:14 -- accel/accel.sh@20 -- # val= 00:08:59.966 21:14:14 -- accel/accel.sh@21 -- # case "$var" in 00:08:59.966 21:14:14 -- accel/accel.sh@19 -- # IFS=: 00:08:59.966 21:14:14 -- accel/accel.sh@19 -- # read -r var val 00:08:59.966 21:14:14 -- accel/accel.sh@20 -- # val= 00:08:59.966 21:14:14 -- accel/accel.sh@21 -- # case "$var" in 00:08:59.966 21:14:14 -- accel/accel.sh@19 -- # IFS=: 00:08:59.966 21:14:14 -- accel/accel.sh@19 -- # read -r var val 00:08:59.966 21:14:14 -- accel/accel.sh@20 -- # val=xor 00:08:59.966 21:14:14 -- accel/accel.sh@21 -- # case "$var" in 00:08:59.966 21:14:14 -- accel/accel.sh@23 -- # accel_opc=xor 00:08:59.966 21:14:14 -- accel/accel.sh@19 -- # IFS=: 00:08:59.966 21:14:14 -- accel/accel.sh@19 -- # read -r var val 00:08:59.966 21:14:14 -- accel/accel.sh@20 -- # val=2 00:08:59.966 21:14:14 -- accel/accel.sh@21 -- # case "$var" in 00:08:59.966 21:14:14 -- accel/accel.sh@19 -- # IFS=: 00:08:59.966 21:14:14 -- accel/accel.sh@19 -- # read -r var val 00:08:59.966 21:14:14 -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:59.966 21:14:14 -- accel/accel.sh@21 -- # case "$var" in 00:08:59.966 21:14:14 -- accel/accel.sh@19 -- # IFS=: 00:08:59.966 21:14:14 -- accel/accel.sh@19 -- # read -r var val 00:08:59.966 21:14:14 -- accel/accel.sh@20 -- # val= 00:08:59.966 21:14:14 -- accel/accel.sh@21 -- # case "$var" in 00:08:59.966 21:14:14 -- accel/accel.sh@19 -- # IFS=: 00:08:59.966 21:14:14 -- accel/accel.sh@19 -- # read -r var val 00:08:59.966 21:14:14 -- accel/accel.sh@20 -- # val=software 00:08:59.966 21:14:14 -- accel/accel.sh@21 -- # case "$var" in 00:08:59.966 21:14:14 -- accel/accel.sh@22 -- # accel_module=software 00:08:59.966 21:14:14 -- accel/accel.sh@19 -- # IFS=: 00:08:59.966 21:14:14 -- accel/accel.sh@19 -- # read -r var val 00:08:59.966 21:14:14 -- accel/accel.sh@20 -- # val=32 00:08:59.966 21:14:14 -- accel/accel.sh@21 -- # case "$var" in 00:08:59.966 21:14:14 -- accel/accel.sh@19 -- # IFS=: 00:08:59.966 21:14:14 -- accel/accel.sh@19 -- # read -r var val 00:08:59.966 21:14:14 -- 
accel/accel.sh@20 -- # val=32 00:08:59.966 21:14:14 -- accel/accel.sh@21 -- # case "$var" in 00:08:59.966 21:14:14 -- accel/accel.sh@19 -- # IFS=: 00:08:59.966 21:14:14 -- accel/accel.sh@19 -- # read -r var val 00:08:59.966 21:14:14 -- accel/accel.sh@20 -- # val=1 00:08:59.966 21:14:14 -- accel/accel.sh@21 -- # case "$var" in 00:08:59.966 21:14:14 -- accel/accel.sh@19 -- # IFS=: 00:08:59.966 21:14:14 -- accel/accel.sh@19 -- # read -r var val 00:08:59.966 21:14:14 -- accel/accel.sh@20 -- # val='1 seconds' 00:08:59.966 21:14:14 -- accel/accel.sh@21 -- # case "$var" in 00:08:59.966 21:14:14 -- accel/accel.sh@19 -- # IFS=: 00:08:59.966 21:14:14 -- accel/accel.sh@19 -- # read -r var val 00:08:59.966 21:14:14 -- accel/accel.sh@20 -- # val=Yes 00:08:59.966 21:14:14 -- accel/accel.sh@21 -- # case "$var" in 00:08:59.966 21:14:14 -- accel/accel.sh@19 -- # IFS=: 00:08:59.966 21:14:14 -- accel/accel.sh@19 -- # read -r var val 00:08:59.966 21:14:14 -- accel/accel.sh@20 -- # val= 00:08:59.966 21:14:14 -- accel/accel.sh@21 -- # case "$var" in 00:08:59.966 21:14:14 -- accel/accel.sh@19 -- # IFS=: 00:08:59.966 21:14:14 -- accel/accel.sh@19 -- # read -r var val 00:08:59.966 21:14:14 -- accel/accel.sh@20 -- # val= 00:08:59.966 21:14:14 -- accel/accel.sh@21 -- # case "$var" in 00:08:59.966 21:14:14 -- accel/accel.sh@19 -- # IFS=: 00:08:59.966 21:14:14 -- accel/accel.sh@19 -- # read -r var val 00:09:03.267 21:14:17 -- accel/accel.sh@20 -- # val= 00:09:03.267 21:14:17 -- accel/accel.sh@21 -- # case "$var" in 00:09:03.267 21:14:17 -- accel/accel.sh@19 -- # IFS=: 00:09:03.267 21:14:17 -- accel/accel.sh@19 -- # read -r var val 00:09:03.267 21:14:17 -- accel/accel.sh@20 -- # val= 00:09:03.267 21:14:17 -- accel/accel.sh@21 -- # case "$var" in 00:09:03.267 21:14:17 -- accel/accel.sh@19 -- # IFS=: 00:09:03.267 21:14:17 -- accel/accel.sh@19 -- # read -r var val 00:09:03.267 21:14:17 -- accel/accel.sh@20 -- # val= 00:09:03.267 21:14:17 -- accel/accel.sh@21 -- # case "$var" in 00:09:03.267 21:14:17 -- accel/accel.sh@19 -- # IFS=: 00:09:03.267 21:14:17 -- accel/accel.sh@19 -- # read -r var val 00:09:03.267 21:14:17 -- accel/accel.sh@20 -- # val= 00:09:03.267 21:14:17 -- accel/accel.sh@21 -- # case "$var" in 00:09:03.267 21:14:17 -- accel/accel.sh@19 -- # IFS=: 00:09:03.267 21:14:17 -- accel/accel.sh@19 -- # read -r var val 00:09:03.267 21:14:17 -- accel/accel.sh@20 -- # val= 00:09:03.267 21:14:17 -- accel/accel.sh@21 -- # case "$var" in 00:09:03.267 21:14:17 -- accel/accel.sh@19 -- # IFS=: 00:09:03.267 21:14:17 -- accel/accel.sh@19 -- # read -r var val 00:09:03.267 21:14:17 -- accel/accel.sh@20 -- # val= 00:09:03.267 21:14:17 -- accel/accel.sh@21 -- # case "$var" in 00:09:03.267 21:14:17 -- accel/accel.sh@19 -- # IFS=: 00:09:03.267 21:14:17 -- accel/accel.sh@19 -- # read -r var val 00:09:03.267 21:14:17 -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:03.267 21:14:17 -- accel/accel.sh@27 -- # [[ -n xor ]] 00:09:03.267 21:14:17 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:03.267 00:09:03.267 real 0m9.658s 00:09:03.267 user 0m3.271s 00:09:03.267 sys 0m0.215s 00:09:03.267 21:14:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:03.267 21:14:17 -- common/autotest_common.sh@10 -- # set +x 00:09:03.267 ************************************ 00:09:03.267 END TEST accel_xor 00:09:03.267 ************************************ 00:09:03.267 21:14:17 -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:09:03.267 21:14:17 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 
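The run_test wrapper invoked at accel.sh@110 just above is what produces the START/END banners and the real/user/sys timings around each test; the '[' 9 -le 1 ']' trace is its argument-count guard. A minimal sketch of such a wrapper, reconstructed from the banners and timings in this log rather than copied from autotest_common.sh (names and details are assumptions):

    # Hypothetical reconstruction of the wrapper's observable behavior.
    run_test_sketch() {
        local name=$1; shift
        [ $# -le 1 ] && echo "note: test body has only $# argument(s)"
        echo "START TEST $name"
        time "$@"            # produces the real/user/sys lines seen here
        echo "END TEST $name"
    }
    # usage: run_test_sketch accel_xor accel_test -t 1 -w xor -y -x 3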
00:09:03.267 21:14:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:03.267 21:14:17 -- common/autotest_common.sh@10 -- # set +x 00:09:03.267 ************************************ 00:09:03.267 START TEST accel_xor 00:09:03.267 ************************************ 00:09:03.267 21:14:17 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w xor -y -x 3 00:09:03.267 21:14:17 -- accel/accel.sh@16 -- # local accel_opc 00:09:03.267 21:14:17 -- accel/accel.sh@17 -- # local accel_module 00:09:03.267 21:14:17 -- accel/accel.sh@19 -- # IFS=: 00:09:03.267 21:14:17 -- accel/accel.sh@19 -- # read -r var val 00:09:03.267 21:14:17 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:09:03.267 21:14:17 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:09:03.267 21:14:17 -- accel/accel.sh@12 -- # build_accel_config 00:09:03.267 21:14:17 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:03.267 21:14:17 -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:09:03.267 21:14:17 -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:09:03.267 21:14:17 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:09:03.267 21:14:17 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:09:03.267 21:14:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:03.267 21:14:17 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:03.267 21:14:17 -- accel/accel.sh@40 -- # local IFS=, 00:09:03.267 21:14:17 -- accel/accel.sh@41 -- # jq -r . 00:09:03.267 [2024-04-24 21:14:17.737775] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization... 00:09:03.267 [2024-04-24 21:14:17.737877] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1055097 ] 00:09:03.267 EAL: No free 2048 kB hugepages reported on node 1 00:09:03.267 [2024-04-24 21:14:17.851330] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:03.267 [2024-04-24 21:14:17.945658] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:03.267 [2024-04-24 21:14:17.950258] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:09:03.267 [2024-04-24 21:14:17.958223] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:09:09.848 21:14:24 -- accel/accel.sh@20 -- # val= 00:09:09.848 21:14:24 -- accel/accel.sh@21 -- # case "$var" in 00:09:09.848 21:14:24 -- accel/accel.sh@19 -- # IFS=: 00:09:09.848 21:14:24 -- accel/accel.sh@19 -- # read -r var val 00:09:09.848 21:14:24 -- accel/accel.sh@20 -- # val= 00:09:09.848 21:14:24 -- accel/accel.sh@21 -- # case "$var" in 00:09:09.848 21:14:24 -- accel/accel.sh@19 -- # IFS=: 00:09:09.848 21:14:24 -- accel/accel.sh@19 -- # read -r var val 00:09:09.848 21:14:24 -- accel/accel.sh@20 -- # val=0x1 00:09:09.848 21:14:24 -- accel/accel.sh@21 -- # case "$var" in 00:09:09.848 21:14:24 -- accel/accel.sh@19 -- # IFS=: 00:09:09.848 21:14:24 -- accel/accel.sh@19 -- # read -r var val 00:09:09.848 21:14:24 -- accel/accel.sh@20 -- # val= 00:09:09.848 21:14:24 -- accel/accel.sh@21 -- # case "$var" in 00:09:09.848 21:14:24 -- accel/accel.sh@19 -- # IFS=: 00:09:09.848 21:14:24 -- accel/accel.sh@19 -- # read -r var val 00:09:09.848 21:14:24 -- accel/accel.sh@20 -- # val= 00:09:09.848 21:14:24 -- accel/accel.sh@21 -- # case "$var" in 00:09:09.848 21:14:24 -- 
accel/accel.sh@19 -- # IFS=: 00:09:09.848 21:14:24 -- accel/accel.sh@19 -- # read -r var val 00:09:09.848 21:14:24 -- accel/accel.sh@20 -- # val=xor 00:09:09.848 21:14:24 -- accel/accel.sh@21 -- # case "$var" in 00:09:09.848 21:14:24 -- accel/accel.sh@23 -- # accel_opc=xor 00:09:09.848 21:14:24 -- accel/accel.sh@19 -- # IFS=: 00:09:09.848 21:14:24 -- accel/accel.sh@19 -- # read -r var val 00:09:09.848 21:14:24 -- accel/accel.sh@20 -- # val=3 00:09:09.848 21:14:24 -- accel/accel.sh@21 -- # case "$var" in 00:09:09.848 21:14:24 -- accel/accel.sh@19 -- # IFS=: 00:09:09.848 21:14:24 -- accel/accel.sh@19 -- # read -r var val 00:09:09.848 21:14:24 -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:09.848 21:14:24 -- accel/accel.sh@21 -- # case "$var" in 00:09:09.848 21:14:24 -- accel/accel.sh@19 -- # IFS=: 00:09:09.848 21:14:24 -- accel/accel.sh@19 -- # read -r var val 00:09:09.848 21:14:24 -- accel/accel.sh@20 -- # val= 00:09:09.848 21:14:24 -- accel/accel.sh@21 -- # case "$var" in 00:09:09.848 21:14:24 -- accel/accel.sh@19 -- # IFS=: 00:09:09.848 21:14:24 -- accel/accel.sh@19 -- # read -r var val 00:09:09.848 21:14:24 -- accel/accel.sh@20 -- # val=software 00:09:09.848 21:14:24 -- accel/accel.sh@21 -- # case "$var" in 00:09:09.848 21:14:24 -- accel/accel.sh@22 -- # accel_module=software 00:09:09.848 21:14:24 -- accel/accel.sh@19 -- # IFS=: 00:09:09.848 21:14:24 -- accel/accel.sh@19 -- # read -r var val 00:09:09.848 21:14:24 -- accel/accel.sh@20 -- # val=32 00:09:09.848 21:14:24 -- accel/accel.sh@21 -- # case "$var" in 00:09:09.848 21:14:24 -- accel/accel.sh@19 -- # IFS=: 00:09:09.848 21:14:24 -- accel/accel.sh@19 -- # read -r var val 00:09:09.848 21:14:24 -- accel/accel.sh@20 -- # val=32 00:09:09.848 21:14:24 -- accel/accel.sh@21 -- # case "$var" in 00:09:09.848 21:14:24 -- accel/accel.sh@19 -- # IFS=: 00:09:09.848 21:14:24 -- accel/accel.sh@19 -- # read -r var val 00:09:09.848 21:14:24 -- accel/accel.sh@20 -- # val=1 00:09:09.848 21:14:24 -- accel/accel.sh@21 -- # case "$var" in 00:09:09.848 21:14:24 -- accel/accel.sh@19 -- # IFS=: 00:09:09.848 21:14:24 -- accel/accel.sh@19 -- # read -r var val 00:09:09.848 21:14:24 -- accel/accel.sh@20 -- # val='1 seconds' 00:09:09.848 21:14:24 -- accel/accel.sh@21 -- # case "$var" in 00:09:09.848 21:14:24 -- accel/accel.sh@19 -- # IFS=: 00:09:09.848 21:14:24 -- accel/accel.sh@19 -- # read -r var val 00:09:09.848 21:14:24 -- accel/accel.sh@20 -- # val=Yes 00:09:09.848 21:14:24 -- accel/accel.sh@21 -- # case "$var" in 00:09:09.848 21:14:24 -- accel/accel.sh@19 -- # IFS=: 00:09:09.848 21:14:24 -- accel/accel.sh@19 -- # read -r var val 00:09:09.848 21:14:24 -- accel/accel.sh@20 -- # val= 00:09:09.848 21:14:24 -- accel/accel.sh@21 -- # case "$var" in 00:09:09.848 21:14:24 -- accel/accel.sh@19 -- # IFS=: 00:09:09.848 21:14:24 -- accel/accel.sh@19 -- # read -r var val 00:09:09.848 21:14:24 -- accel/accel.sh@20 -- # val= 00:09:09.848 21:14:24 -- accel/accel.sh@21 -- # case "$var" in 00:09:09.848 21:14:24 -- accel/accel.sh@19 -- # IFS=: 00:09:09.848 21:14:24 -- accel/accel.sh@19 -- # read -r var val 00:09:13.140 21:14:27 -- accel/accel.sh@20 -- # val= 00:09:13.140 21:14:27 -- accel/accel.sh@21 -- # case "$var" in 00:09:13.140 21:14:27 -- accel/accel.sh@19 -- # IFS=: 00:09:13.140 21:14:27 -- accel/accel.sh@19 -- # read -r var val 00:09:13.140 21:14:27 -- accel/accel.sh@20 -- # val= 00:09:13.140 21:14:27 -- accel/accel.sh@21 -- # case "$var" in 00:09:13.140 21:14:27 -- accel/accel.sh@19 -- # IFS=: 00:09:13.140 21:14:27 -- accel/accel.sh@19 -- # read -r var val 
00:09:13.140 21:14:27 -- accel/accel.sh@20 -- # val= 00:09:13.140 21:14:27 -- accel/accel.sh@21 -- # case "$var" in 00:09:13.140 21:14:27 -- accel/accel.sh@19 -- # IFS=: 00:09:13.140 21:14:27 -- accel/accel.sh@19 -- # read -r var val 00:09:13.140 21:14:27 -- accel/accel.sh@20 -- # val= 00:09:13.140 21:14:27 -- accel/accel.sh@21 -- # case "$var" in 00:09:13.140 21:14:27 -- accel/accel.sh@19 -- # IFS=: 00:09:13.140 21:14:27 -- accel/accel.sh@19 -- # read -r var val 00:09:13.140 21:14:27 -- accel/accel.sh@20 -- # val= 00:09:13.140 21:14:27 -- accel/accel.sh@21 -- # case "$var" in 00:09:13.140 21:14:27 -- accel/accel.sh@19 -- # IFS=: 00:09:13.140 21:14:27 -- accel/accel.sh@19 -- # read -r var val 00:09:13.140 21:14:27 -- accel/accel.sh@20 -- # val= 00:09:13.140 21:14:27 -- accel/accel.sh@21 -- # case "$var" in 00:09:13.140 21:14:27 -- accel/accel.sh@19 -- # IFS=: 00:09:13.140 21:14:27 -- accel/accel.sh@19 -- # read -r var val 00:09:13.140 21:14:27 -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:13.140 21:14:27 -- accel/accel.sh@27 -- # [[ -n xor ]] 00:09:13.140 21:14:27 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:13.140 00:09:13.140 real 0m9.680s 00:09:13.140 user 0m3.271s 00:09:13.140 sys 0m0.234s 00:09:13.140 21:14:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:13.140 21:14:27 -- common/autotest_common.sh@10 -- # set +x 00:09:13.140 ************************************ 00:09:13.140 END TEST accel_xor 00:09:13.140 ************************************ 00:09:13.140 21:14:27 -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:09:13.140 21:14:27 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:09:13.140 21:14:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:13.140 21:14:27 -- common/autotest_common.sh@10 -- # set +x 00:09:13.140 ************************************ 00:09:13.140 START TEST accel_dif_verify 00:09:13.140 ************************************ 00:09:13.140 21:14:27 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_verify 00:09:13.140 21:14:27 -- accel/accel.sh@16 -- # local accel_opc 00:09:13.140 21:14:27 -- accel/accel.sh@17 -- # local accel_module 00:09:13.140 21:14:27 -- accel/accel.sh@19 -- # IFS=: 00:09:13.140 21:14:27 -- accel/accel.sh@19 -- # read -r var val 00:09:13.140 21:14:27 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:09:13.140 21:14:27 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:09:13.140 21:14:27 -- accel/accel.sh@12 -- # build_accel_config 00:09:13.140 21:14:27 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:13.140 21:14:27 -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:09:13.140 21:14:27 -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:09:13.140 21:14:27 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:09:13.140 21:14:27 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:09:13.140 21:14:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:13.140 21:14:27 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:13.140 21:14:27 -- accel/accel.sh@40 -- # local IFS=, 00:09:13.140 21:14:27 -- accel/accel.sh@41 -- # jq -r . 00:09:13.140 [2024-04-24 21:14:27.514351] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization... 
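accel_dif_verify, starting here, checks T10-DIF protection information rather than plain payloads: the trace below configures a 4096-byte transfer in '512 bytes' blocks with '8 bytes' of DIF metadata per block, again on the dsa module. Sketch:

    # Sketch: DIF verify with the block/metadata sizes shown in the trace
    # below; a standalone run would also need the -c accel config for DSA.
    sudo ./build/examples/accel_perf -t 1 -w dif_verify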
00:09:13.140 [2024-04-24 21:14:27.514418] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1057065 ] 00:09:13.140 EAL: No free 2048 kB hugepages reported on node 1 00:09:13.140 [2024-04-24 21:14:27.603799] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:13.140 [2024-04-24 21:14:27.704802] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:13.140 [2024-04-24 21:14:27.709314] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:09:13.140 [2024-04-24 21:14:27.717282] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:09:19.710 21:14:34 -- accel/accel.sh@20 -- # val= 00:09:19.710 21:14:34 -- accel/accel.sh@21 -- # case "$var" in 00:09:19.710 21:14:34 -- accel/accel.sh@19 -- # IFS=: 00:09:19.710 21:14:34 -- accel/accel.sh@19 -- # read -r var val 00:09:19.710 21:14:34 -- accel/accel.sh@20 -- # val= 00:09:19.710 21:14:34 -- accel/accel.sh@21 -- # case "$var" in 00:09:19.710 21:14:34 -- accel/accel.sh@19 -- # IFS=: 00:09:19.710 21:14:34 -- accel/accel.sh@19 -- # read -r var val 00:09:19.710 21:14:34 -- accel/accel.sh@20 -- # val=0x1 00:09:19.710 21:14:34 -- accel/accel.sh@21 -- # case "$var" in 00:09:19.710 21:14:34 -- accel/accel.sh@19 -- # IFS=: 00:09:19.710 21:14:34 -- accel/accel.sh@19 -- # read -r var val 00:09:19.710 21:14:34 -- accel/accel.sh@20 -- # val= 00:09:19.710 21:14:34 -- accel/accel.sh@21 -- # case "$var" in 00:09:19.710 21:14:34 -- accel/accel.sh@19 -- # IFS=: 00:09:19.710 21:14:34 -- accel/accel.sh@19 -- # read -r var val 00:09:19.710 21:14:34 -- accel/accel.sh@20 -- # val= 00:09:19.710 21:14:34 -- accel/accel.sh@21 -- # case "$var" in 00:09:19.710 21:14:34 -- accel/accel.sh@19 -- # IFS=: 00:09:19.710 21:14:34 -- accel/accel.sh@19 -- # read -r var val 00:09:19.710 21:14:34 -- accel/accel.sh@20 -- # val=dif_verify 00:09:19.710 21:14:34 -- accel/accel.sh@21 -- # case "$var" in 00:09:19.710 21:14:34 -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:09:19.710 21:14:34 -- accel/accel.sh@19 -- # IFS=: 00:09:19.710 21:14:34 -- accel/accel.sh@19 -- # read -r var val 00:09:19.710 21:14:34 -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:19.710 21:14:34 -- accel/accel.sh@21 -- # case "$var" in 00:09:19.710 21:14:34 -- accel/accel.sh@19 -- # IFS=: 00:09:19.710 21:14:34 -- accel/accel.sh@19 -- # read -r var val 00:09:19.710 21:14:34 -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:19.710 21:14:34 -- accel/accel.sh@21 -- # case "$var" in 00:09:19.710 21:14:34 -- accel/accel.sh@19 -- # IFS=: 00:09:19.710 21:14:34 -- accel/accel.sh@19 -- # read -r var val 00:09:19.710 21:14:34 -- accel/accel.sh@20 -- # val='512 bytes' 00:09:19.710 21:14:34 -- accel/accel.sh@21 -- # case "$var" in 00:09:19.710 21:14:34 -- accel/accel.sh@19 -- # IFS=: 00:09:19.710 21:14:34 -- accel/accel.sh@19 -- # read -r var val 00:09:19.710 21:14:34 -- accel/accel.sh@20 -- # val='8 bytes' 00:09:19.710 21:14:34 -- accel/accel.sh@21 -- # case "$var" in 00:09:19.710 21:14:34 -- accel/accel.sh@19 -- # IFS=: 00:09:19.710 21:14:34 -- accel/accel.sh@19 -- # read -r var val 00:09:19.710 21:14:34 -- accel/accel.sh@20 -- # val= 00:09:19.710 21:14:34 -- accel/accel.sh@21 -- # case "$var" in 00:09:19.710 21:14:34 -- accel/accel.sh@19 -- # IFS=: 00:09:19.710 21:14:34 -- accel/accel.sh@19 -- # read -r var val 00:09:19.710 21:14:34 -- accel/accel.sh@20 -- # val=dsa 
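The dif_verify block above lands on the hardware path (val=dsa, accel_module=dsa). A sketch of launching the same run by hand, using the binary path and the two scan methods printed in the log; the outer "subsystems" JSON wrapper is an assumption on my part, since the harness only logs the two method objects:

  SPDK=/var/jenkins/workspace/dsa-phy-autotest/spdk
  cfg='{"subsystems":[{"subsystem":"accel","config":[
        {"method":"dsa_scan_accel_module"},
        {"method":"iaa_scan_accel_module"}]}]}'
  "$SPDK/build/examples/accel_perf" -c <(printf '%s' "$cfg") -t 1 -w dif_verify

The process substitution reproduces the "-c /dev/fd/62" seen in the logged command line; on a box without DSA hardware the same invocation would presumably fall back to the software engine.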
00:09:19.710 21:14:34 -- accel/accel.sh@21 -- # case "$var" in 00:09:19.710 21:14:34 -- accel/accel.sh@22 -- # accel_module=dsa 00:09:19.710 21:14:34 -- accel/accel.sh@19 -- # IFS=: 00:09:19.710 21:14:34 -- accel/accel.sh@19 -- # read -r var val 00:09:19.710 21:14:34 -- accel/accel.sh@20 -- # val=32 00:09:19.710 21:14:34 -- accel/accel.sh@21 -- # case "$var" in 00:09:19.710 21:14:34 -- accel/accel.sh@19 -- # IFS=: 00:09:19.710 21:14:34 -- accel/accel.sh@19 -- # read -r var val 00:09:19.711 21:14:34 -- accel/accel.sh@20 -- # val=32 00:09:19.711 21:14:34 -- accel/accel.sh@21 -- # case "$var" in 00:09:19.711 21:14:34 -- accel/accel.sh@19 -- # IFS=: 00:09:19.711 21:14:34 -- accel/accel.sh@19 -- # read -r var val 00:09:19.711 21:14:34 -- accel/accel.sh@20 -- # val=1 00:09:19.711 21:14:34 -- accel/accel.sh@21 -- # case "$var" in 00:09:19.711 21:14:34 -- accel/accel.sh@19 -- # IFS=: 00:09:19.711 21:14:34 -- accel/accel.sh@19 -- # read -r var val 00:09:19.711 21:14:34 -- accel/accel.sh@20 -- # val='1 seconds' 00:09:19.711 21:14:34 -- accel/accel.sh@21 -- # case "$var" in 00:09:19.711 21:14:34 -- accel/accel.sh@19 -- # IFS=: 00:09:19.711 21:14:34 -- accel/accel.sh@19 -- # read -r var val 00:09:19.711 21:14:34 -- accel/accel.sh@20 -- # val=No 00:09:19.711 21:14:34 -- accel/accel.sh@21 -- # case "$var" in 00:09:19.711 21:14:34 -- accel/accel.sh@19 -- # IFS=: 00:09:19.711 21:14:34 -- accel/accel.sh@19 -- # read -r var val 00:09:19.711 21:14:34 -- accel/accel.sh@20 -- # val= 00:09:19.711 21:14:34 -- accel/accel.sh@21 -- # case "$var" in 00:09:19.711 21:14:34 -- accel/accel.sh@19 -- # IFS=: 00:09:19.711 21:14:34 -- accel/accel.sh@19 -- # read -r var val 00:09:19.711 21:14:34 -- accel/accel.sh@20 -- # val= 00:09:19.711 21:14:34 -- accel/accel.sh@21 -- # case "$var" in 00:09:19.711 21:14:34 -- accel/accel.sh@19 -- # IFS=: 00:09:19.711 21:14:34 -- accel/accel.sh@19 -- # read -r var val 00:09:22.250 21:14:37 -- accel/accel.sh@20 -- # val= 00:09:22.250 21:14:37 -- accel/accel.sh@21 -- # case "$var" in 00:09:22.250 21:14:37 -- accel/accel.sh@19 -- # IFS=: 00:09:22.250 21:14:37 -- accel/accel.sh@19 -- # read -r var val 00:09:22.250 21:14:37 -- accel/accel.sh@20 -- # val= 00:09:22.250 21:14:37 -- accel/accel.sh@21 -- # case "$var" in 00:09:22.250 21:14:37 -- accel/accel.sh@19 -- # IFS=: 00:09:22.250 21:14:37 -- accel/accel.sh@19 -- # read -r var val 00:09:22.250 21:14:37 -- accel/accel.sh@20 -- # val= 00:09:22.250 21:14:37 -- accel/accel.sh@21 -- # case "$var" in 00:09:22.250 21:14:37 -- accel/accel.sh@19 -- # IFS=: 00:09:22.250 21:14:37 -- accel/accel.sh@19 -- # read -r var val 00:09:22.250 21:14:37 -- accel/accel.sh@20 -- # val= 00:09:22.250 21:14:37 -- accel/accel.sh@21 -- # case "$var" in 00:09:22.250 21:14:37 -- accel/accel.sh@19 -- # IFS=: 00:09:22.250 21:14:37 -- accel/accel.sh@19 -- # read -r var val 00:09:22.250 21:14:37 -- accel/accel.sh@20 -- # val= 00:09:22.250 21:14:37 -- accel/accel.sh@21 -- # case "$var" in 00:09:22.250 21:14:37 -- accel/accel.sh@19 -- # IFS=: 00:09:22.250 21:14:37 -- accel/accel.sh@19 -- # read -r var val 00:09:22.250 21:14:37 -- accel/accel.sh@20 -- # val= 00:09:22.250 21:14:37 -- accel/accel.sh@21 -- # case "$var" in 00:09:22.250 21:14:37 -- accel/accel.sh@19 -- # IFS=: 00:09:22.250 21:14:37 -- accel/accel.sh@19 -- # read -r var val 00:09:22.250 21:14:37 -- accel/accel.sh@27 -- # [[ -n dsa ]] 00:09:22.250 21:14:37 -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:09:22.250 21:14:37 -- accel/accel.sh@27 -- # [[ dsa == \d\s\a ]] 00:09:22.250 00:09:22.250 real 0m9.632s 
00:09:22.250 user 0m3.257s 00:09:22.250 sys 0m0.199s 00:09:22.250 21:14:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:22.250 21:14:37 -- common/autotest_common.sh@10 -- # set +x 00:09:22.250 ************************************ 00:09:22.250 END TEST accel_dif_verify 00:09:22.250 ************************************ 00:09:22.250 21:14:37 -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:09:22.250 21:14:37 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:09:22.250 21:14:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:22.250 21:14:37 -- common/autotest_common.sh@10 -- # set +x 00:09:22.509 ************************************ 00:09:22.509 START TEST accel_dif_generate 00:09:22.509 ************************************ 00:09:22.509 21:14:37 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_generate 00:09:22.509 21:14:37 -- accel/accel.sh@16 -- # local accel_opc 00:09:22.509 21:14:37 -- accel/accel.sh@17 -- # local accel_module 00:09:22.509 21:14:37 -- accel/accel.sh@19 -- # IFS=: 00:09:22.509 21:14:37 -- accel/accel.sh@19 -- # read -r var val 00:09:22.509 21:14:37 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:09:22.509 21:14:37 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:09:22.509 21:14:37 -- accel/accel.sh@12 -- # build_accel_config 00:09:22.509 21:14:37 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:22.509 21:14:37 -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:09:22.509 21:14:37 -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:09:22.509 21:14:37 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:09:22.509 21:14:37 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:09:22.509 21:14:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:22.509 21:14:37 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:22.509 21:14:37 -- accel/accel.sh@40 -- # local IFS=, 00:09:22.509 21:14:37 -- accel/accel.sh@41 -- # jq -r . 00:09:22.509 [2024-04-24 21:14:37.255894] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization... 
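The three sizes the DIF tests dump ('4096 bytes', '512 bytes', '8 bytes') line up with T10 DIF parameters: a 4 KiB payload of 512-byte logical blocks carrying 8 bytes of protection information per block. That mapping is my reading of the dump, not something the harness states; as a quick arithmetic check:

  payload=4096 block=512 pi=8
  echo "PI bytes per payload: $(( (payload / block) * pi ))"   # 8 blocks -> 64 bytes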
00:09:22.510 [2024-04-24 21:14:37.255992] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1059014 ] 00:09:22.510 EAL: No free 2048 kB hugepages reported on node 1 00:09:22.510 [2024-04-24 21:14:37.366093] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:22.510 [2024-04-24 21:14:37.459965] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:22.510 [2024-04-24 21:14:37.464426] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:09:22.510 [2024-04-24 21:14:37.472392] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:09:29.168 21:14:43 -- accel/accel.sh@20 -- # val= 00:09:29.168 21:14:43 -- accel/accel.sh@21 -- # case "$var" in 00:09:29.168 21:14:43 -- accel/accel.sh@19 -- # IFS=: 00:09:29.168 21:14:43 -- accel/accel.sh@19 -- # read -r var val 00:09:29.168 21:14:43 -- accel/accel.sh@20 -- # val= 00:09:29.168 21:14:43 -- accel/accel.sh@21 -- # case "$var" in 00:09:29.168 21:14:43 -- accel/accel.sh@19 -- # IFS=: 00:09:29.168 21:14:43 -- accel/accel.sh@19 -- # read -r var val 00:09:29.168 21:14:43 -- accel/accel.sh@20 -- # val=0x1 00:09:29.168 21:14:43 -- accel/accel.sh@21 -- # case "$var" in 00:09:29.168 21:14:43 -- accel/accel.sh@19 -- # IFS=: 00:09:29.168 21:14:43 -- accel/accel.sh@19 -- # read -r var val 00:09:29.168 21:14:43 -- accel/accel.sh@20 -- # val= 00:09:29.168 21:14:43 -- accel/accel.sh@21 -- # case "$var" in 00:09:29.168 21:14:43 -- accel/accel.sh@19 -- # IFS=: 00:09:29.168 21:14:43 -- accel/accel.sh@19 -- # read -r var val 00:09:29.168 21:14:43 -- accel/accel.sh@20 -- # val= 00:09:29.168 21:14:43 -- accel/accel.sh@21 -- # case "$var" in 00:09:29.168 21:14:43 -- accel/accel.sh@19 -- # IFS=: 00:09:29.168 21:14:43 -- accel/accel.sh@19 -- # read -r var val 00:09:29.168 21:14:43 -- accel/accel.sh@20 -- # val=dif_generate 00:09:29.168 21:14:43 -- accel/accel.sh@21 -- # case "$var" in 00:09:29.168 21:14:43 -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:09:29.168 21:14:43 -- accel/accel.sh@19 -- # IFS=: 00:09:29.168 21:14:43 -- accel/accel.sh@19 -- # read -r var val 00:09:29.168 21:14:43 -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:29.168 21:14:43 -- accel/accel.sh@21 -- # case "$var" in 00:09:29.168 21:14:43 -- accel/accel.sh@19 -- # IFS=: 00:09:29.168 21:14:43 -- accel/accel.sh@19 -- # read -r var val 00:09:29.168 21:14:43 -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:29.168 21:14:43 -- accel/accel.sh@21 -- # case "$var" in 00:09:29.168 21:14:43 -- accel/accel.sh@19 -- # IFS=: 00:09:29.168 21:14:43 -- accel/accel.sh@19 -- # read -r var val 00:09:29.168 21:14:43 -- accel/accel.sh@20 -- # val='512 bytes' 00:09:29.168 21:14:43 -- accel/accel.sh@21 -- # case "$var" in 00:09:29.168 21:14:43 -- accel/accel.sh@19 -- # IFS=: 00:09:29.168 21:14:43 -- accel/accel.sh@19 -- # read -r var val 00:09:29.168 21:14:43 -- accel/accel.sh@20 -- # val='8 bytes' 00:09:29.168 21:14:43 -- accel/accel.sh@21 -- # case "$var" in 00:09:29.168 21:14:43 -- accel/accel.sh@19 -- # IFS=: 00:09:29.168 21:14:43 -- accel/accel.sh@19 -- # read -r var val 00:09:29.168 21:14:43 -- accel/accel.sh@20 -- # val= 00:09:29.168 21:14:43 -- accel/accel.sh@21 -- # case "$var" in 00:09:29.168 21:14:43 -- accel/accel.sh@19 -- # IFS=: 00:09:29.168 21:14:43 -- accel/accel.sh@19 -- # read -r var val 00:09:29.168 21:14:43 -- accel/accel.sh@20 -- # 
val=software 00:09:29.168 21:14:43 -- accel/accel.sh@21 -- # case "$var" in 00:09:29.168 21:14:43 -- accel/accel.sh@22 -- # accel_module=software 00:09:29.168 21:14:43 -- accel/accel.sh@19 -- # IFS=: 00:09:29.168 21:14:43 -- accel/accel.sh@19 -- # read -r var val 00:09:29.168 21:14:43 -- accel/accel.sh@20 -- # val=32 00:09:29.168 21:14:43 -- accel/accel.sh@21 -- # case "$var" in 00:09:29.168 21:14:43 -- accel/accel.sh@19 -- # IFS=: 00:09:29.168 21:14:43 -- accel/accel.sh@19 -- # read -r var val 00:09:29.168 21:14:43 -- accel/accel.sh@20 -- # val=32 00:09:29.168 21:14:43 -- accel/accel.sh@21 -- # case "$var" in 00:09:29.168 21:14:43 -- accel/accel.sh@19 -- # IFS=: 00:09:29.168 21:14:43 -- accel/accel.sh@19 -- # read -r var val 00:09:29.168 21:14:43 -- accel/accel.sh@20 -- # val=1 00:09:29.168 21:14:43 -- accel/accel.sh@21 -- # case "$var" in 00:09:29.168 21:14:43 -- accel/accel.sh@19 -- # IFS=: 00:09:29.168 21:14:43 -- accel/accel.sh@19 -- # read -r var val 00:09:29.168 21:14:43 -- accel/accel.sh@20 -- # val='1 seconds' 00:09:29.168 21:14:43 -- accel/accel.sh@21 -- # case "$var" in 00:09:29.168 21:14:43 -- accel/accel.sh@19 -- # IFS=: 00:09:29.168 21:14:43 -- accel/accel.sh@19 -- # read -r var val 00:09:29.168 21:14:43 -- accel/accel.sh@20 -- # val=No 00:09:29.168 21:14:43 -- accel/accel.sh@21 -- # case "$var" in 00:09:29.168 21:14:43 -- accel/accel.sh@19 -- # IFS=: 00:09:29.168 21:14:43 -- accel/accel.sh@19 -- # read -r var val 00:09:29.168 21:14:43 -- accel/accel.sh@20 -- # val= 00:09:29.168 21:14:43 -- accel/accel.sh@21 -- # case "$var" in 00:09:29.168 21:14:43 -- accel/accel.sh@19 -- # IFS=: 00:09:29.168 21:14:43 -- accel/accel.sh@19 -- # read -r var val 00:09:29.168 21:14:43 -- accel/accel.sh@20 -- # val= 00:09:29.168 21:14:43 -- accel/accel.sh@21 -- # case "$var" in 00:09:29.168 21:14:43 -- accel/accel.sh@19 -- # IFS=: 00:09:29.168 21:14:43 -- accel/accel.sh@19 -- # read -r var val 00:09:32.459 21:14:46 -- accel/accel.sh@20 -- # val= 00:09:32.459 21:14:46 -- accel/accel.sh@21 -- # case "$var" in 00:09:32.459 21:14:46 -- accel/accel.sh@19 -- # IFS=: 00:09:32.459 21:14:46 -- accel/accel.sh@19 -- # read -r var val 00:09:32.459 21:14:46 -- accel/accel.sh@20 -- # val= 00:09:32.459 21:14:46 -- accel/accel.sh@21 -- # case "$var" in 00:09:32.459 21:14:46 -- accel/accel.sh@19 -- # IFS=: 00:09:32.459 21:14:46 -- accel/accel.sh@19 -- # read -r var val 00:09:32.459 21:14:46 -- accel/accel.sh@20 -- # val= 00:09:32.459 21:14:46 -- accel/accel.sh@21 -- # case "$var" in 00:09:32.459 21:14:46 -- accel/accel.sh@19 -- # IFS=: 00:09:32.459 21:14:46 -- accel/accel.sh@19 -- # read -r var val 00:09:32.459 21:14:46 -- accel/accel.sh@20 -- # val= 00:09:32.459 21:14:46 -- accel/accel.sh@21 -- # case "$var" in 00:09:32.459 21:14:46 -- accel/accel.sh@19 -- # IFS=: 00:09:32.459 21:14:46 -- accel/accel.sh@19 -- # read -r var val 00:09:32.459 21:14:46 -- accel/accel.sh@20 -- # val= 00:09:32.459 21:14:46 -- accel/accel.sh@21 -- # case "$var" in 00:09:32.459 21:14:46 -- accel/accel.sh@19 -- # IFS=: 00:09:32.459 21:14:46 -- accel/accel.sh@19 -- # read -r var val 00:09:32.459 21:14:46 -- accel/accel.sh@20 -- # val= 00:09:32.459 21:14:46 -- accel/accel.sh@21 -- # case "$var" in 00:09:32.459 21:14:46 -- accel/accel.sh@19 -- # IFS=: 00:09:32.459 21:14:46 -- accel/accel.sh@19 -- # read -r var val 00:09:32.459 21:14:46 -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:32.459 21:14:46 -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:09:32.459 21:14:46 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 
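The "[[ software == \s\o\f\t\w\a\r\e ]]" entries above are not corruption: xtrace re-prints a quoted right-hand side with every character escaped, which is how bash marks a literal (non-glob) comparison inside [[ == ]]. An equivalent, more readable form of the same check:

  [[ $accel_module == "software" ]] && echo "ran on the software fallback engine"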
00:09:32.459
00:09:32.459 real 0m9.661s user 0m3.272s sys 0m0.223s
00:09:32.459 21:14:46 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:09:32.459 21:14:46 -- common/autotest_common.sh@10 -- # set +x
00:09:32.459 ************************************
00:09:32.459 END TEST accel_dif_generate
00:09:32.459 ************************************
00:09:32.459 21:14:46 -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy
00:09:32.459 21:14:46 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']'
00:09:32.459 21:14:46 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:09:32.459 21:14:46 -- common/autotest_common.sh@10 -- # set +x
00:09:32.459 ************************************
00:09:32.459 START TEST accel_dif_generate_copy
00:09:32.459 ************************************
00:09:32.459 21:14:46 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_generate_copy
00:09:32.459 21:14:46 -- accel/accel.sh@16 -- # local accel_opc
00:09:32.459 21:14:46 -- accel/accel.sh@17 -- # local accel_module
00:09:32.459 21:14:46 -- accel/accel.sh@19 -- # IFS=:
00:09:32.459 21:14:46 -- accel/accel.sh@19 -- # read -r var val
00:09:32.459 21:14:46 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy
00:09:32.459 21:14:46 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy
00:09:32.459 21:14:46 -- accel/accel.sh@12 -- # build_accel_config
00:09:32.459 21:14:46 -- accel/accel.sh@31 -- # accel_json_cfg=()
00:09:32.459 21:14:46 -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]]
00:09:32.459 21:14:46 -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}')
00:09:32.459 21:14:46 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]]
00:09:32.459 21:14:46 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}')
00:09:32.459 21:14:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:09:32.459 21:14:46 -- accel/accel.sh@36 -- # [[ -n '' ]]
00:09:32.459 21:14:46 -- accel/accel.sh@40 -- # local IFS=,
00:09:32.459 21:14:46 -- accel/accel.sh@41 -- # jq -r .
00:09:32.459 [2024-04-24 21:14:47.027378] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization...
00:09:32.459 [2024-04-24 21:14:47.027481] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1060844 ] 00:09:32.459 EAL: No free 2048 kB hugepages reported on node 1 00:09:32.459 [2024-04-24 21:14:47.143497] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:32.459 [2024-04-24 21:14:47.243456] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:32.459 [2024-04-24 21:14:47.247948] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:09:32.459 [2024-04-24 21:14:47.255914] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:09:39.037 21:14:53 -- accel/accel.sh@20 -- # val= 00:09:39.037 21:14:53 -- accel/accel.sh@21 -- # case "$var" in 00:09:39.037 21:14:53 -- accel/accel.sh@19 -- # IFS=: 00:09:39.037 21:14:53 -- accel/accel.sh@19 -- # read -r var val 00:09:39.037 21:14:53 -- accel/accel.sh@20 -- # val= 00:09:39.037 21:14:53 -- accel/accel.sh@21 -- # case "$var" in 00:09:39.037 21:14:53 -- accel/accel.sh@19 -- # IFS=: 00:09:39.037 21:14:53 -- accel/accel.sh@19 -- # read -r var val 00:09:39.037 21:14:53 -- accel/accel.sh@20 -- # val=0x1 00:09:39.037 21:14:53 -- accel/accel.sh@21 -- # case "$var" in 00:09:39.037 21:14:53 -- accel/accel.sh@19 -- # IFS=: 00:09:39.037 21:14:53 -- accel/accel.sh@19 -- # read -r var val 00:09:39.037 21:14:53 -- accel/accel.sh@20 -- # val= 00:09:39.037 21:14:53 -- accel/accel.sh@21 -- # case "$var" in 00:09:39.037 21:14:53 -- accel/accel.sh@19 -- # IFS=: 00:09:39.037 21:14:53 -- accel/accel.sh@19 -- # read -r var val 00:09:39.037 21:14:53 -- accel/accel.sh@20 -- # val= 00:09:39.037 21:14:53 -- accel/accel.sh@21 -- # case "$var" in 00:09:39.037 21:14:53 -- accel/accel.sh@19 -- # IFS=: 00:09:39.037 21:14:53 -- accel/accel.sh@19 -- # read -r var val 00:09:39.037 21:14:53 -- accel/accel.sh@20 -- # val=dif_generate_copy 00:09:39.037 21:14:53 -- accel/accel.sh@21 -- # case "$var" in 00:09:39.037 21:14:53 -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:09:39.037 21:14:53 -- accel/accel.sh@19 -- # IFS=: 00:09:39.037 21:14:53 -- accel/accel.sh@19 -- # read -r var val 00:09:39.037 21:14:53 -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:39.037 21:14:53 -- accel/accel.sh@21 -- # case "$var" in 00:09:39.037 21:14:53 -- accel/accel.sh@19 -- # IFS=: 00:09:39.037 21:14:53 -- accel/accel.sh@19 -- # read -r var val 00:09:39.037 21:14:53 -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:39.037 21:14:53 -- accel/accel.sh@21 -- # case "$var" in 00:09:39.037 21:14:53 -- accel/accel.sh@19 -- # IFS=: 00:09:39.037 21:14:53 -- accel/accel.sh@19 -- # read -r var val 00:09:39.037 21:14:53 -- accel/accel.sh@20 -- # val= 00:09:39.037 21:14:53 -- accel/accel.sh@21 -- # case "$var" in 00:09:39.037 21:14:53 -- accel/accel.sh@19 -- # IFS=: 00:09:39.037 21:14:53 -- accel/accel.sh@19 -- # read -r var val 00:09:39.037 21:14:53 -- accel/accel.sh@20 -- # val=dsa 00:09:39.037 21:14:53 -- accel/accel.sh@21 -- # case "$var" in 00:09:39.037 21:14:53 -- accel/accel.sh@22 -- # accel_module=dsa 00:09:39.037 21:14:53 -- accel/accel.sh@19 -- # IFS=: 00:09:39.037 21:14:53 -- accel/accel.sh@19 -- # read -r var val 00:09:39.037 21:14:53 -- accel/accel.sh@20 -- # val=32 00:09:39.037 21:14:53 -- accel/accel.sh@21 -- # case "$var" in 00:09:39.037 21:14:53 -- accel/accel.sh@19 -- # IFS=: 00:09:39.037 21:14:53 -- accel/accel.sh@19 -- # read -r var 
val 00:09:39.037 21:14:53 -- accel/accel.sh@20 -- # val=32 00:09:39.037 21:14:53 -- accel/accel.sh@21 -- # case "$var" in 00:09:39.037 21:14:53 -- accel/accel.sh@19 -- # IFS=: 00:09:39.037 21:14:53 -- accel/accel.sh@19 -- # read -r var val 00:09:39.037 21:14:53 -- accel/accel.sh@20 -- # val=1 00:09:39.037 21:14:53 -- accel/accel.sh@21 -- # case "$var" in 00:09:39.037 21:14:53 -- accel/accel.sh@19 -- # IFS=: 00:09:39.037 21:14:53 -- accel/accel.sh@19 -- # read -r var val 00:09:39.037 21:14:53 -- accel/accel.sh@20 -- # val='1 seconds' 00:09:39.037 21:14:53 -- accel/accel.sh@21 -- # case "$var" in 00:09:39.037 21:14:53 -- accel/accel.sh@19 -- # IFS=: 00:09:39.037 21:14:53 -- accel/accel.sh@19 -- # read -r var val 00:09:39.037 21:14:53 -- accel/accel.sh@20 -- # val=No 00:09:39.037 21:14:53 -- accel/accel.sh@21 -- # case "$var" in 00:09:39.037 21:14:53 -- accel/accel.sh@19 -- # IFS=: 00:09:39.037 21:14:53 -- accel/accel.sh@19 -- # read -r var val 00:09:39.037 21:14:53 -- accel/accel.sh@20 -- # val= 00:09:39.037 21:14:53 -- accel/accel.sh@21 -- # case "$var" in 00:09:39.037 21:14:53 -- accel/accel.sh@19 -- # IFS=: 00:09:39.037 21:14:53 -- accel/accel.sh@19 -- # read -r var val 00:09:39.037 21:14:53 -- accel/accel.sh@20 -- # val= 00:09:39.037 21:14:53 -- accel/accel.sh@21 -- # case "$var" in 00:09:39.037 21:14:53 -- accel/accel.sh@19 -- # IFS=: 00:09:39.037 21:14:53 -- accel/accel.sh@19 -- # read -r var val 00:09:42.335 21:14:56 -- accel/accel.sh@20 -- # val= 00:09:42.335 21:14:56 -- accel/accel.sh@21 -- # case "$var" in 00:09:42.335 21:14:56 -- accel/accel.sh@19 -- # IFS=: 00:09:42.335 21:14:56 -- accel/accel.sh@19 -- # read -r var val 00:09:42.335 21:14:56 -- accel/accel.sh@20 -- # val= 00:09:42.335 21:14:56 -- accel/accel.sh@21 -- # case "$var" in 00:09:42.335 21:14:56 -- accel/accel.sh@19 -- # IFS=: 00:09:42.335 21:14:56 -- accel/accel.sh@19 -- # read -r var val 00:09:42.335 21:14:56 -- accel/accel.sh@20 -- # val= 00:09:42.335 21:14:56 -- accel/accel.sh@21 -- # case "$var" in 00:09:42.335 21:14:56 -- accel/accel.sh@19 -- # IFS=: 00:09:42.335 21:14:56 -- accel/accel.sh@19 -- # read -r var val 00:09:42.335 21:14:56 -- accel/accel.sh@20 -- # val= 00:09:42.335 21:14:56 -- accel/accel.sh@21 -- # case "$var" in 00:09:42.335 21:14:56 -- accel/accel.sh@19 -- # IFS=: 00:09:42.335 21:14:56 -- accel/accel.sh@19 -- # read -r var val 00:09:42.335 21:14:56 -- accel/accel.sh@20 -- # val= 00:09:42.335 21:14:56 -- accel/accel.sh@21 -- # case "$var" in 00:09:42.335 21:14:56 -- accel/accel.sh@19 -- # IFS=: 00:09:42.335 21:14:56 -- accel/accel.sh@19 -- # read -r var val 00:09:42.335 21:14:56 -- accel/accel.sh@20 -- # val= 00:09:42.335 21:14:56 -- accel/accel.sh@21 -- # case "$var" in 00:09:42.335 21:14:56 -- accel/accel.sh@19 -- # IFS=: 00:09:42.335 21:14:56 -- accel/accel.sh@19 -- # read -r var val 00:09:42.335 21:14:56 -- accel/accel.sh@27 -- # [[ -n dsa ]] 00:09:42.335 21:14:56 -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:09:42.335 21:14:56 -- accel/accel.sh@27 -- # [[ dsa == \d\s\a ]] 00:09:42.335 00:09:42.335 real 0m9.667s 00:09:42.335 user 0m3.264s 00:09:42.335 sys 0m0.234s 00:09:42.335 21:14:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:42.335 21:14:56 -- common/autotest_common.sh@10 -- # set +x 00:09:42.335 ************************************ 00:09:42.335 END TEST accel_dif_generate_copy 00:09:42.335 ************************************ 00:09:42.335 21:14:56 -- accel/accel.sh@115 -- # [[ y == y ]] 00:09:42.335 21:14:56 -- accel/accel.sh@116 -- # run_test accel_comp accel_test 
-t 1 -w compress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:09:42.335 21:14:56 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:09:42.335 21:14:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:42.335 21:14:56 -- common/autotest_common.sh@10 -- # set +x 00:09:42.335 ************************************ 00:09:42.335 START TEST accel_comp 00:09:42.335 ************************************ 00:09:42.335 21:14:56 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:09:42.335 21:14:56 -- accel/accel.sh@16 -- # local accel_opc 00:09:42.335 21:14:56 -- accel/accel.sh@17 -- # local accel_module 00:09:42.335 21:14:56 -- accel/accel.sh@19 -- # IFS=: 00:09:42.335 21:14:56 -- accel/accel.sh@19 -- # read -r var val 00:09:42.335 21:14:56 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:09:42.335 21:14:56 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:09:42.335 21:14:56 -- accel/accel.sh@12 -- # build_accel_config 00:09:42.335 21:14:56 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:42.335 21:14:56 -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:09:42.335 21:14:56 -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:09:42.335 21:14:56 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:09:42.335 21:14:56 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:09:42.335 21:14:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:42.335 21:14:56 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:42.335 21:14:56 -- accel/accel.sh@40 -- # local IFS=, 00:09:42.335 21:14:56 -- accel/accel.sh@41 -- # jq -r . 00:09:42.335 [2024-04-24 21:14:56.799671] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization... 
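Unlike the fixed 4 KiB buffers of the tests above, the compress case hands accel_perf a real payload via -l, and the dump below shows the iaa module picked up for it. A sketch of the equivalent manual invocation, reusing SPDK and cfg from the dif_verify sketch earlier (whether "bib" is the Calgary-corpus text of that name is a guess from the filename, not stated in this log):

  "$SPDK/build/examples/accel_perf" -c <(printf '%s' "$cfg") \
      -t 1 -w compress -l "$SPDK/test/accel/bib"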
00:09:42.336 [2024-04-24 21:14:56.799777] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1062934 ] 00:09:42.336 EAL: No free 2048 kB hugepages reported on node 1 00:09:42.336 [2024-04-24 21:14:56.914125] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:42.336 [2024-04-24 21:14:57.004247] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:42.336 [2024-04-24 21:14:57.008726] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:09:42.336 [2024-04-24 21:14:57.016692] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:09:48.908 21:15:03 -- accel/accel.sh@20 -- # val= 00:09:48.908 21:15:03 -- accel/accel.sh@21 -- # case "$var" in 00:09:48.908 21:15:03 -- accel/accel.sh@19 -- # IFS=: 00:09:48.908 21:15:03 -- accel/accel.sh@19 -- # read -r var val 00:09:48.908 21:15:03 -- accel/accel.sh@20 -- # val= 00:09:48.908 21:15:03 -- accel/accel.sh@21 -- # case "$var" in 00:09:48.908 21:15:03 -- accel/accel.sh@19 -- # IFS=: 00:09:48.908 21:15:03 -- accel/accel.sh@19 -- # read -r var val 00:09:48.908 21:15:03 -- accel/accel.sh@20 -- # val= 00:09:48.908 21:15:03 -- accel/accel.sh@21 -- # case "$var" in 00:09:48.908 21:15:03 -- accel/accel.sh@19 -- # IFS=: 00:09:48.908 21:15:03 -- accel/accel.sh@19 -- # read -r var val 00:09:48.908 21:15:03 -- accel/accel.sh@20 -- # val=0x1 00:09:48.908 21:15:03 -- accel/accel.sh@21 -- # case "$var" in 00:09:48.908 21:15:03 -- accel/accel.sh@19 -- # IFS=: 00:09:48.908 21:15:03 -- accel/accel.sh@19 -- # read -r var val 00:09:48.908 21:15:03 -- accel/accel.sh@20 -- # val= 00:09:48.908 21:15:03 -- accel/accel.sh@21 -- # case "$var" in 00:09:48.908 21:15:03 -- accel/accel.sh@19 -- # IFS=: 00:09:48.908 21:15:03 -- accel/accel.sh@19 -- # read -r var val 00:09:48.908 21:15:03 -- accel/accel.sh@20 -- # val= 00:09:48.908 21:15:03 -- accel/accel.sh@21 -- # case "$var" in 00:09:48.908 21:15:03 -- accel/accel.sh@19 -- # IFS=: 00:09:48.908 21:15:03 -- accel/accel.sh@19 -- # read -r var val 00:09:48.908 21:15:03 -- accel/accel.sh@20 -- # val=compress 00:09:48.908 21:15:03 -- accel/accel.sh@21 -- # case "$var" in 00:09:48.908 21:15:03 -- accel/accel.sh@23 -- # accel_opc=compress 00:09:48.908 21:15:03 -- accel/accel.sh@19 -- # IFS=: 00:09:48.908 21:15:03 -- accel/accel.sh@19 -- # read -r var val 00:09:48.908 21:15:03 -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:48.908 21:15:03 -- accel/accel.sh@21 -- # case "$var" in 00:09:48.908 21:15:03 -- accel/accel.sh@19 -- # IFS=: 00:09:48.908 21:15:03 -- accel/accel.sh@19 -- # read -r var val 00:09:48.908 21:15:03 -- accel/accel.sh@20 -- # val= 00:09:48.908 21:15:03 -- accel/accel.sh@21 -- # case "$var" in 00:09:48.908 21:15:03 -- accel/accel.sh@19 -- # IFS=: 00:09:48.908 21:15:03 -- accel/accel.sh@19 -- # read -r var val 00:09:48.908 21:15:03 -- accel/accel.sh@20 -- # val=iaa 00:09:48.908 21:15:03 -- accel/accel.sh@21 -- # case "$var" in 00:09:48.908 21:15:03 -- accel/accel.sh@22 -- # accel_module=iaa 00:09:48.908 21:15:03 -- accel/accel.sh@19 -- # IFS=: 00:09:48.908 21:15:03 -- accel/accel.sh@19 -- # read -r var val 00:09:48.908 21:15:03 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:09:48.908 21:15:03 -- accel/accel.sh@21 -- # case "$var" in 00:09:48.908 21:15:03 -- accel/accel.sh@19 -- # IFS=: 00:09:48.908 21:15:03 -- 
accel/accel.sh@19 -- # read -r var val 00:09:48.908 21:15:03 -- accel/accel.sh@20 -- # val=32 00:09:48.908 21:15:03 -- accel/accel.sh@21 -- # case "$var" in 00:09:48.908 21:15:03 -- accel/accel.sh@19 -- # IFS=: 00:09:48.908 21:15:03 -- accel/accel.sh@19 -- # read -r var val 00:09:48.908 21:15:03 -- accel/accel.sh@20 -- # val=32 00:09:48.908 21:15:03 -- accel/accel.sh@21 -- # case "$var" in 00:09:48.908 21:15:03 -- accel/accel.sh@19 -- # IFS=: 00:09:48.908 21:15:03 -- accel/accel.sh@19 -- # read -r var val 00:09:48.908 21:15:03 -- accel/accel.sh@20 -- # val=1 00:09:48.908 21:15:03 -- accel/accel.sh@21 -- # case "$var" in 00:09:48.908 21:15:03 -- accel/accel.sh@19 -- # IFS=: 00:09:48.908 21:15:03 -- accel/accel.sh@19 -- # read -r var val 00:09:48.908 21:15:03 -- accel/accel.sh@20 -- # val='1 seconds' 00:09:48.908 21:15:03 -- accel/accel.sh@21 -- # case "$var" in 00:09:48.908 21:15:03 -- accel/accel.sh@19 -- # IFS=: 00:09:48.908 21:15:03 -- accel/accel.sh@19 -- # read -r var val 00:09:48.908 21:15:03 -- accel/accel.sh@20 -- # val=No 00:09:48.908 21:15:03 -- accel/accel.sh@21 -- # case "$var" in 00:09:48.908 21:15:03 -- accel/accel.sh@19 -- # IFS=: 00:09:48.908 21:15:03 -- accel/accel.sh@19 -- # read -r var val 00:09:48.908 21:15:03 -- accel/accel.sh@20 -- # val= 00:09:48.908 21:15:03 -- accel/accel.sh@21 -- # case "$var" in 00:09:48.908 21:15:03 -- accel/accel.sh@19 -- # IFS=: 00:09:48.908 21:15:03 -- accel/accel.sh@19 -- # read -r var val 00:09:48.908 21:15:03 -- accel/accel.sh@20 -- # val= 00:09:48.908 21:15:03 -- accel/accel.sh@21 -- # case "$var" in 00:09:48.908 21:15:03 -- accel/accel.sh@19 -- # IFS=: 00:09:48.908 21:15:03 -- accel/accel.sh@19 -- # read -r var val 00:09:51.447 21:15:06 -- accel/accel.sh@20 -- # val= 00:09:51.447 21:15:06 -- accel/accel.sh@21 -- # case "$var" in 00:09:51.447 21:15:06 -- accel/accel.sh@19 -- # IFS=: 00:09:51.447 21:15:06 -- accel/accel.sh@19 -- # read -r var val 00:09:51.447 21:15:06 -- accel/accel.sh@20 -- # val= 00:09:51.447 21:15:06 -- accel/accel.sh@21 -- # case "$var" in 00:09:51.447 21:15:06 -- accel/accel.sh@19 -- # IFS=: 00:09:51.447 21:15:06 -- accel/accel.sh@19 -- # read -r var val 00:09:51.447 21:15:06 -- accel/accel.sh@20 -- # val= 00:09:51.447 21:15:06 -- accel/accel.sh@21 -- # case "$var" in 00:09:51.447 21:15:06 -- accel/accel.sh@19 -- # IFS=: 00:09:51.447 21:15:06 -- accel/accel.sh@19 -- # read -r var val 00:09:51.447 21:15:06 -- accel/accel.sh@20 -- # val= 00:09:51.447 21:15:06 -- accel/accel.sh@21 -- # case "$var" in 00:09:51.447 21:15:06 -- accel/accel.sh@19 -- # IFS=: 00:09:51.447 21:15:06 -- accel/accel.sh@19 -- # read -r var val 00:09:51.447 21:15:06 -- accel/accel.sh@20 -- # val= 00:09:51.447 21:15:06 -- accel/accel.sh@21 -- # case "$var" in 00:09:51.447 21:15:06 -- accel/accel.sh@19 -- # IFS=: 00:09:51.447 21:15:06 -- accel/accel.sh@19 -- # read -r var val 00:09:51.447 21:15:06 -- accel/accel.sh@20 -- # val= 00:09:51.447 21:15:06 -- accel/accel.sh@21 -- # case "$var" in 00:09:51.447 21:15:06 -- accel/accel.sh@19 -- # IFS=: 00:09:51.447 21:15:06 -- accel/accel.sh@19 -- # read -r var val 00:09:51.708 21:15:06 -- accel/accel.sh@27 -- # [[ -n iaa ]] 00:09:51.708 21:15:06 -- accel/accel.sh@27 -- # [[ -n compress ]] 00:09:51.708 21:15:06 -- accel/accel.sh@27 -- # [[ iaa == \i\a\a ]] 00:09:51.708 00:09:51.708 real 0m9.652s 00:09:51.708 user 0m3.261s 00:09:51.708 sys 0m0.222s 00:09:51.708 21:15:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:51.708 21:15:06 -- common/autotest_common.sh@10 -- # set +x 00:09:51.708 
************************************ 00:09:51.708 END TEST accel_comp 00:09:51.708 ************************************ 00:09:51.708 21:15:06 -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y 00:09:51.708 21:15:06 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:09:51.708 21:15:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:51.708 21:15:06 -- common/autotest_common.sh@10 -- # set +x 00:09:51.708 ************************************ 00:09:51.708 START TEST accel_decomp 00:09:51.708 ************************************ 00:09:51.708 21:15:06 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y 00:09:51.708 21:15:06 -- accel/accel.sh@16 -- # local accel_opc 00:09:51.708 21:15:06 -- accel/accel.sh@17 -- # local accel_module 00:09:51.708 21:15:06 -- accel/accel.sh@19 -- # IFS=: 00:09:51.708 21:15:06 -- accel/accel.sh@19 -- # read -r var val 00:09:51.708 21:15:06 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y 00:09:51.708 21:15:06 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y 00:09:51.708 21:15:06 -- accel/accel.sh@12 -- # build_accel_config 00:09:51.708 21:15:06 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:51.708 21:15:06 -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:09:51.708 21:15:06 -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:09:51.708 21:15:06 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:09:51.708 21:15:06 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:09:51.708 21:15:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:51.708 21:15:06 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:51.708 21:15:06 -- accel/accel.sh@40 -- # local IFS=, 00:09:51.708 21:15:06 -- accel/accel.sh@41 -- # jq -r . 00:09:51.708 [2024-04-24 21:15:06.560906] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization... 
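A note on the timing summaries: every workload runs for one second (-t 1), yet the single-core cases all report roughly 9.6-9.7 s of real time against about 3.2 s of user time. The gap looks like DPDK/EAL start-up (hugepage and device setup) rather than the workload itself; run_test appears to wrap each case in bash's time builtin, so the by-hand equivalent (SPDK and cfg as in the earlier sketch) would be roughly:

  time "$SPDK/build/examples/accel_perf" -c <(printf '%s' "$cfg") \
      -t 1 -w decompress -l "$SPDK/test/accel/bib" -y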
00:09:51.708 [2024-04-24 21:15:06.561011] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1065084 ] 00:09:51.708 EAL: No free 2048 kB hugepages reported on node 1 00:09:51.966 [2024-04-24 21:15:06.677622] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:51.966 [2024-04-24 21:15:06.774346] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:51.966 [2024-04-24 21:15:06.778822] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:09:51.966 [2024-04-24 21:15:06.786791] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:09:58.543 21:15:13 -- accel/accel.sh@20 -- # val= 00:09:58.543 21:15:13 -- accel/accel.sh@21 -- # case "$var" in 00:09:58.543 21:15:13 -- accel/accel.sh@19 -- # IFS=: 00:09:58.543 21:15:13 -- accel/accel.sh@19 -- # read -r var val 00:09:58.543 21:15:13 -- accel/accel.sh@20 -- # val= 00:09:58.543 21:15:13 -- accel/accel.sh@21 -- # case "$var" in 00:09:58.543 21:15:13 -- accel/accel.sh@19 -- # IFS=: 00:09:58.543 21:15:13 -- accel/accel.sh@19 -- # read -r var val 00:09:58.543 21:15:13 -- accel/accel.sh@20 -- # val= 00:09:58.543 21:15:13 -- accel/accel.sh@21 -- # case "$var" in 00:09:58.543 21:15:13 -- accel/accel.sh@19 -- # IFS=: 00:09:58.543 21:15:13 -- accel/accel.sh@19 -- # read -r var val 00:09:58.543 21:15:13 -- accel/accel.sh@20 -- # val=0x1 00:09:58.543 21:15:13 -- accel/accel.sh@21 -- # case "$var" in 00:09:58.543 21:15:13 -- accel/accel.sh@19 -- # IFS=: 00:09:58.543 21:15:13 -- accel/accel.sh@19 -- # read -r var val 00:09:58.543 21:15:13 -- accel/accel.sh@20 -- # val= 00:09:58.543 21:15:13 -- accel/accel.sh@21 -- # case "$var" in 00:09:58.543 21:15:13 -- accel/accel.sh@19 -- # IFS=: 00:09:58.543 21:15:13 -- accel/accel.sh@19 -- # read -r var val 00:09:58.543 21:15:13 -- accel/accel.sh@20 -- # val= 00:09:58.543 21:15:13 -- accel/accel.sh@21 -- # case "$var" in 00:09:58.543 21:15:13 -- accel/accel.sh@19 -- # IFS=: 00:09:58.543 21:15:13 -- accel/accel.sh@19 -- # read -r var val 00:09:58.543 21:15:13 -- accel/accel.sh@20 -- # val=decompress 00:09:58.543 21:15:13 -- accel/accel.sh@21 -- # case "$var" in 00:09:58.543 21:15:13 -- accel/accel.sh@23 -- # accel_opc=decompress 00:09:58.543 21:15:13 -- accel/accel.sh@19 -- # IFS=: 00:09:58.543 21:15:13 -- accel/accel.sh@19 -- # read -r var val 00:09:58.543 21:15:13 -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:58.543 21:15:13 -- accel/accel.sh@21 -- # case "$var" in 00:09:58.543 21:15:13 -- accel/accel.sh@19 -- # IFS=: 00:09:58.543 21:15:13 -- accel/accel.sh@19 -- # read -r var val 00:09:58.543 21:15:13 -- accel/accel.sh@20 -- # val= 00:09:58.543 21:15:13 -- accel/accel.sh@21 -- # case "$var" in 00:09:58.543 21:15:13 -- accel/accel.sh@19 -- # IFS=: 00:09:58.543 21:15:13 -- accel/accel.sh@19 -- # read -r var val 00:09:58.543 21:15:13 -- accel/accel.sh@20 -- # val=iaa 00:09:58.543 21:15:13 -- accel/accel.sh@21 -- # case "$var" in 00:09:58.543 21:15:13 -- accel/accel.sh@22 -- # accel_module=iaa 00:09:58.543 21:15:13 -- accel/accel.sh@19 -- # IFS=: 00:09:58.543 21:15:13 -- accel/accel.sh@19 -- # read -r var val 00:09:58.543 21:15:13 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:09:58.543 21:15:13 -- accel/accel.sh@21 -- # case "$var" in 00:09:58.543 21:15:13 -- accel/accel.sh@19 -- # IFS=: 00:09:58.543 21:15:13 -- 
accel/accel.sh@19 -- # read -r var val 00:09:58.543 21:15:13 -- accel/accel.sh@20 -- # val=32 00:09:58.543 21:15:13 -- accel/accel.sh@21 -- # case "$var" in 00:09:58.544 21:15:13 -- accel/accel.sh@19 -- # IFS=: 00:09:58.544 21:15:13 -- accel/accel.sh@19 -- # read -r var val 00:09:58.544 21:15:13 -- accel/accel.sh@20 -- # val=32 00:09:58.544 21:15:13 -- accel/accel.sh@21 -- # case "$var" in 00:09:58.544 21:15:13 -- accel/accel.sh@19 -- # IFS=: 00:09:58.544 21:15:13 -- accel/accel.sh@19 -- # read -r var val 00:09:58.544 21:15:13 -- accel/accel.sh@20 -- # val=1 00:09:58.544 21:15:13 -- accel/accel.sh@21 -- # case "$var" in 00:09:58.544 21:15:13 -- accel/accel.sh@19 -- # IFS=: 00:09:58.544 21:15:13 -- accel/accel.sh@19 -- # read -r var val 00:09:58.544 21:15:13 -- accel/accel.sh@20 -- # val='1 seconds' 00:09:58.544 21:15:13 -- accel/accel.sh@21 -- # case "$var" in 00:09:58.544 21:15:13 -- accel/accel.sh@19 -- # IFS=: 00:09:58.544 21:15:13 -- accel/accel.sh@19 -- # read -r var val 00:09:58.544 21:15:13 -- accel/accel.sh@20 -- # val=Yes 00:09:58.544 21:15:13 -- accel/accel.sh@21 -- # case "$var" in 00:09:58.544 21:15:13 -- accel/accel.sh@19 -- # IFS=: 00:09:58.544 21:15:13 -- accel/accel.sh@19 -- # read -r var val 00:09:58.544 21:15:13 -- accel/accel.sh@20 -- # val= 00:09:58.544 21:15:13 -- accel/accel.sh@21 -- # case "$var" in 00:09:58.544 21:15:13 -- accel/accel.sh@19 -- # IFS=: 00:09:58.544 21:15:13 -- accel/accel.sh@19 -- # read -r var val 00:09:58.544 21:15:13 -- accel/accel.sh@20 -- # val= 00:09:58.544 21:15:13 -- accel/accel.sh@21 -- # case "$var" in 00:09:58.544 21:15:13 -- accel/accel.sh@19 -- # IFS=: 00:09:58.544 21:15:13 -- accel/accel.sh@19 -- # read -r var val 00:10:01.835 21:15:16 -- accel/accel.sh@20 -- # val= 00:10:01.836 21:15:16 -- accel/accel.sh@21 -- # case "$var" in 00:10:01.836 21:15:16 -- accel/accel.sh@19 -- # IFS=: 00:10:01.836 21:15:16 -- accel/accel.sh@19 -- # read -r var val 00:10:01.836 21:15:16 -- accel/accel.sh@20 -- # val= 00:10:01.836 21:15:16 -- accel/accel.sh@21 -- # case "$var" in 00:10:01.836 21:15:16 -- accel/accel.sh@19 -- # IFS=: 00:10:01.836 21:15:16 -- accel/accel.sh@19 -- # read -r var val 00:10:01.836 21:15:16 -- accel/accel.sh@20 -- # val= 00:10:01.836 21:15:16 -- accel/accel.sh@21 -- # case "$var" in 00:10:01.836 21:15:16 -- accel/accel.sh@19 -- # IFS=: 00:10:01.836 21:15:16 -- accel/accel.sh@19 -- # read -r var val 00:10:01.836 21:15:16 -- accel/accel.sh@20 -- # val= 00:10:01.836 21:15:16 -- accel/accel.sh@21 -- # case "$var" in 00:10:01.836 21:15:16 -- accel/accel.sh@19 -- # IFS=: 00:10:01.836 21:15:16 -- accel/accel.sh@19 -- # read -r var val 00:10:01.836 21:15:16 -- accel/accel.sh@20 -- # val= 00:10:01.836 21:15:16 -- accel/accel.sh@21 -- # case "$var" in 00:10:01.836 21:15:16 -- accel/accel.sh@19 -- # IFS=: 00:10:01.836 21:15:16 -- accel/accel.sh@19 -- # read -r var val 00:10:01.836 21:15:16 -- accel/accel.sh@20 -- # val= 00:10:01.836 21:15:16 -- accel/accel.sh@21 -- # case "$var" in 00:10:01.836 21:15:16 -- accel/accel.sh@19 -- # IFS=: 00:10:01.836 21:15:16 -- accel/accel.sh@19 -- # read -r var val 00:10:01.836 21:15:16 -- accel/accel.sh@27 -- # [[ -n iaa ]] 00:10:01.836 21:15:16 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:10:01.836 21:15:16 -- accel/accel.sh@27 -- # [[ iaa == \i\a\a ]] 00:10:01.836 00:10:01.836 real 0m9.676s 00:10:01.836 user 0m3.257s 00:10:01.836 sys 0m0.247s 00:10:01.836 21:15:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:01.836 21:15:16 -- common/autotest_common.sh@10 -- # set +x 00:10:01.836 
************************************ 00:10:01.836 END TEST accel_decomp 00:10:01.836 ************************************ 00:10:01.836 21:15:16 -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 00:10:01.836 21:15:16 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:10:01.836 21:15:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:01.836 21:15:16 -- common/autotest_common.sh@10 -- # set +x 00:10:01.836 ************************************ 00:10:01.836 START TEST accel_decmop_full 00:10:01.836 ************************************ 00:10:01.836 21:15:16 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 00:10:01.836 21:15:16 -- accel/accel.sh@16 -- # local accel_opc 00:10:01.836 21:15:16 -- accel/accel.sh@17 -- # local accel_module 00:10:01.836 21:15:16 -- accel/accel.sh@19 -- # IFS=: 00:10:01.836 21:15:16 -- accel/accel.sh@19 -- # read -r var val 00:10:01.836 21:15:16 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 00:10:01.836 21:15:16 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 00:10:01.836 21:15:16 -- accel/accel.sh@12 -- # build_accel_config 00:10:01.836 21:15:16 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:01.836 21:15:16 -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:10:01.836 21:15:16 -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:10:01.836 21:15:16 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:10:01.836 21:15:16 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:10:01.836 21:15:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:01.836 21:15:16 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:01.836 21:15:16 -- accel/accel.sh@40 -- # local IFS=, 00:10:01.836 21:15:16 -- accel/accel.sh@41 -- # jq -r . 00:10:01.836 [2024-04-24 21:15:16.348104] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization... 
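This decmop_full variant adds "-o 0" to the decompress command line, and the config dump that follows reports val='111250 bytes' where the earlier tests showed '4096 bytes'. Read together, -o 0 seems to mean "use the whole input file per operation", with 111250 bytes presumably the size of bib; that is an inference from this log, not a documented flag semantic. A quick check on the build host:

  stat -c %s "$SPDK/test/accel/bib"   # expect 111250 if the inference holds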
00:10:01.836 [2024-04-24 21:15:16.348205] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1067337 ] 00:10:01.836 EAL: No free 2048 kB hugepages reported on node 1 00:10:01.836 [2024-04-24 21:15:16.460456] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:01.836 [2024-04-24 21:15:16.555332] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:01.836 [2024-04-24 21:15:16.559812] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:10:01.836 [2024-04-24 21:15:16.567778] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:10:08.414 21:15:22 -- accel/accel.sh@20 -- # val= 00:10:08.414 21:15:22 -- accel/accel.sh@21 -- # case "$var" in 00:10:08.414 21:15:22 -- accel/accel.sh@19 -- # IFS=: 00:10:08.414 21:15:22 -- accel/accel.sh@19 -- # read -r var val 00:10:08.414 21:15:22 -- accel/accel.sh@20 -- # val= 00:10:08.414 21:15:22 -- accel/accel.sh@21 -- # case "$var" in 00:10:08.414 21:15:22 -- accel/accel.sh@19 -- # IFS=: 00:10:08.414 21:15:22 -- accel/accel.sh@19 -- # read -r var val 00:10:08.414 21:15:22 -- accel/accel.sh@20 -- # val= 00:10:08.414 21:15:22 -- accel/accel.sh@21 -- # case "$var" in 00:10:08.414 21:15:22 -- accel/accel.sh@19 -- # IFS=: 00:10:08.414 21:15:22 -- accel/accel.sh@19 -- # read -r var val 00:10:08.414 21:15:22 -- accel/accel.sh@20 -- # val=0x1 00:10:08.414 21:15:22 -- accel/accel.sh@21 -- # case "$var" in 00:10:08.414 21:15:22 -- accel/accel.sh@19 -- # IFS=: 00:10:08.414 21:15:22 -- accel/accel.sh@19 -- # read -r var val 00:10:08.414 21:15:22 -- accel/accel.sh@20 -- # val= 00:10:08.414 21:15:22 -- accel/accel.sh@21 -- # case "$var" in 00:10:08.414 21:15:22 -- accel/accel.sh@19 -- # IFS=: 00:10:08.414 21:15:22 -- accel/accel.sh@19 -- # read -r var val 00:10:08.414 21:15:22 -- accel/accel.sh@20 -- # val= 00:10:08.414 21:15:22 -- accel/accel.sh@21 -- # case "$var" in 00:10:08.414 21:15:22 -- accel/accel.sh@19 -- # IFS=: 00:10:08.414 21:15:22 -- accel/accel.sh@19 -- # read -r var val 00:10:08.414 21:15:22 -- accel/accel.sh@20 -- # val=decompress 00:10:08.414 21:15:22 -- accel/accel.sh@21 -- # case "$var" in 00:10:08.414 21:15:22 -- accel/accel.sh@23 -- # accel_opc=decompress 00:10:08.414 21:15:22 -- accel/accel.sh@19 -- # IFS=: 00:10:08.414 21:15:22 -- accel/accel.sh@19 -- # read -r var val 00:10:08.414 21:15:22 -- accel/accel.sh@20 -- # val='111250 bytes' 00:10:08.414 21:15:22 -- accel/accel.sh@21 -- # case "$var" in 00:10:08.414 21:15:22 -- accel/accel.sh@19 -- # IFS=: 00:10:08.414 21:15:22 -- accel/accel.sh@19 -- # read -r var val 00:10:08.414 21:15:22 -- accel/accel.sh@20 -- # val= 00:10:08.414 21:15:22 -- accel/accel.sh@21 -- # case "$var" in 00:10:08.414 21:15:22 -- accel/accel.sh@19 -- # IFS=: 00:10:08.414 21:15:22 -- accel/accel.sh@19 -- # read -r var val 00:10:08.414 21:15:22 -- accel/accel.sh@20 -- # val=iaa 00:10:08.414 21:15:22 -- accel/accel.sh@21 -- # case "$var" in 00:10:08.414 21:15:22 -- accel/accel.sh@22 -- # accel_module=iaa 00:10:08.414 21:15:22 -- accel/accel.sh@19 -- # IFS=: 00:10:08.414 21:15:22 -- accel/accel.sh@19 -- # read -r var val 00:10:08.414 21:15:22 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:10:08.414 21:15:22 -- accel/accel.sh@21 -- # case "$var" in 00:10:08.414 21:15:22 -- accel/accel.sh@19 -- # IFS=: 00:10:08.414 21:15:22 -- 
accel/accel.sh@19 -- # read -r var val 00:10:08.414 21:15:22 -- accel/accel.sh@20 -- # val=32 00:10:08.414 21:15:22 -- accel/accel.sh@21 -- # case "$var" in 00:10:08.414 21:15:22 -- accel/accel.sh@19 -- # IFS=: 00:10:08.414 21:15:22 -- accel/accel.sh@19 -- # read -r var val 00:10:08.414 21:15:22 -- accel/accel.sh@20 -- # val=32 00:10:08.414 21:15:22 -- accel/accel.sh@21 -- # case "$var" in 00:10:08.414 21:15:22 -- accel/accel.sh@19 -- # IFS=: 00:10:08.414 21:15:22 -- accel/accel.sh@19 -- # read -r var val 00:10:08.414 21:15:22 -- accel/accel.sh@20 -- # val=1 00:10:08.414 21:15:22 -- accel/accel.sh@21 -- # case "$var" in 00:10:08.414 21:15:22 -- accel/accel.sh@19 -- # IFS=: 00:10:08.414 21:15:22 -- accel/accel.sh@19 -- # read -r var val 00:10:08.414 21:15:22 -- accel/accel.sh@20 -- # val='1 seconds' 00:10:08.414 21:15:22 -- accel/accel.sh@21 -- # case "$var" in 00:10:08.414 21:15:22 -- accel/accel.sh@19 -- # IFS=: 00:10:08.415 21:15:22 -- accel/accel.sh@19 -- # read -r var val 00:10:08.415 21:15:22 -- accel/accel.sh@20 -- # val=Yes 00:10:08.415 21:15:22 -- accel/accel.sh@21 -- # case "$var" in 00:10:08.415 21:15:22 -- accel/accel.sh@19 -- # IFS=: 00:10:08.415 21:15:22 -- accel/accel.sh@19 -- # read -r var val 00:10:08.415 21:15:22 -- accel/accel.sh@20 -- # val= 00:10:08.415 21:15:22 -- accel/accel.sh@21 -- # case "$var" in 00:10:08.415 21:15:22 -- accel/accel.sh@19 -- # IFS=: 00:10:08.415 21:15:22 -- accel/accel.sh@19 -- # read -r var val 00:10:08.415 21:15:22 -- accel/accel.sh@20 -- # val= 00:10:08.415 21:15:22 -- accel/accel.sh@21 -- # case "$var" in 00:10:08.415 21:15:22 -- accel/accel.sh@19 -- # IFS=: 00:10:08.415 21:15:22 -- accel/accel.sh@19 -- # read -r var val 00:10:11.708 21:15:25 -- accel/accel.sh@20 -- # val= 00:10:11.708 21:15:25 -- accel/accel.sh@21 -- # case "$var" in 00:10:11.708 21:15:25 -- accel/accel.sh@19 -- # IFS=: 00:10:11.708 21:15:25 -- accel/accel.sh@19 -- # read -r var val 00:10:11.708 21:15:25 -- accel/accel.sh@20 -- # val= 00:10:11.708 21:15:25 -- accel/accel.sh@21 -- # case "$var" in 00:10:11.708 21:15:25 -- accel/accel.sh@19 -- # IFS=: 00:10:11.708 21:15:25 -- accel/accel.sh@19 -- # read -r var val 00:10:11.708 21:15:25 -- accel/accel.sh@20 -- # val= 00:10:11.708 21:15:25 -- accel/accel.sh@21 -- # case "$var" in 00:10:11.708 21:15:25 -- accel/accel.sh@19 -- # IFS=: 00:10:11.708 21:15:25 -- accel/accel.sh@19 -- # read -r var val 00:10:11.708 21:15:25 -- accel/accel.sh@20 -- # val= 00:10:11.708 21:15:25 -- accel/accel.sh@21 -- # case "$var" in 00:10:11.708 21:15:25 -- accel/accel.sh@19 -- # IFS=: 00:10:11.708 21:15:25 -- accel/accel.sh@19 -- # read -r var val 00:10:11.708 21:15:25 -- accel/accel.sh@20 -- # val= 00:10:11.708 21:15:25 -- accel/accel.sh@21 -- # case "$var" in 00:10:11.708 21:15:25 -- accel/accel.sh@19 -- # IFS=: 00:10:11.708 21:15:25 -- accel/accel.sh@19 -- # read -r var val 00:10:11.708 21:15:25 -- accel/accel.sh@20 -- # val= 00:10:11.708 21:15:25 -- accel/accel.sh@21 -- # case "$var" in 00:10:11.708 21:15:25 -- accel/accel.sh@19 -- # IFS=: 00:10:11.708 21:15:25 -- accel/accel.sh@19 -- # read -r var val 00:10:11.708 21:15:25 -- accel/accel.sh@27 -- # [[ -n iaa ]] 00:10:11.708 21:15:25 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:10:11.708 21:15:25 -- accel/accel.sh@27 -- # [[ iaa == \i\a\a ]] 00:10:11.708 00:10:11.708 real 0m9.671s 00:10:11.708 user 0m3.279s 00:10:11.708 sys 0m0.221s 00:10:11.708 21:15:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:11.708 21:15:25 -- common/autotest_common.sh@10 -- # set +x 00:10:11.708 
************************************ 00:10:11.708 END TEST accel_decmop_full 00:10:11.708 ************************************ 00:10:11.708 21:15:26 -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:10:11.708 21:15:26 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:10:11.708 21:15:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:11.708 21:15:26 -- common/autotest_common.sh@10 -- # set +x 00:10:11.708 ************************************ 00:10:11.708 START TEST accel_decomp_mcore 00:10:11.708 ************************************ 00:10:11.708 21:15:26 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:10:11.708 21:15:26 -- accel/accel.sh@16 -- # local accel_opc 00:10:11.708 21:15:26 -- accel/accel.sh@17 -- # local accel_module 00:10:11.708 21:15:26 -- accel/accel.sh@19 -- # IFS=: 00:10:11.708 21:15:26 -- accel/accel.sh@19 -- # read -r var val 00:10:11.708 21:15:26 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:10:11.708 21:15:26 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:10:11.708 21:15:26 -- accel/accel.sh@12 -- # build_accel_config 00:10:11.708 21:15:26 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:11.708 21:15:26 -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:10:11.708 21:15:26 -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:10:11.708 21:15:26 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:10:11.708 21:15:26 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:10:11.708 21:15:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:11.708 21:15:26 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:11.708 21:15:26 -- accel/accel.sh@40 -- # local IFS=, 00:10:11.708 21:15:26 -- accel/accel.sh@41 -- # jq -r . 00:10:11.708 [2024-04-24 21:15:26.127581] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization... 
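The mcore variant passes -m 0xf, handing EAL a four-core mask; the four "Reactor started on core 0..3" notices just below are the visible result. The same invocation by hand (SPDK and cfg as in the earlier sketch) would look like:

  "$SPDK/build/examples/accel_perf" -c <(printf '%s' "$cfg") \
      -t 1 -w decompress -l "$SPDK/test/accel/bib" -y -m 0xf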
00:10:11.708 [2024-04-24 21:15:26.127687] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1069203 ] 00:10:11.708 EAL: No free 2048 kB hugepages reported on node 1 00:10:11.708 [2024-04-24 21:15:26.243199] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:11.708 [2024-04-24 21:15:26.342241] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:11.708 [2024-04-24 21:15:26.342345] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:11.708 [2024-04-24 21:15:26.342595] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:11.708 [2024-04-24 21:15:26.342599] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:11.708 [2024-04-24 21:15:26.347138] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:10:11.708 [2024-04-24 21:15:26.355109] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:10:18.272 21:15:32 -- accel/accel.sh@20 -- # val= 00:10:18.272 21:15:32 -- accel/accel.sh@21 -- # case "$var" in 00:10:18.272 21:15:32 -- accel/accel.sh@19 -- # IFS=: 00:10:18.272 21:15:32 -- accel/accel.sh@19 -- # read -r var val 00:10:18.272 21:15:32 -- accel/accel.sh@20 -- # val= 00:10:18.272 21:15:32 -- accel/accel.sh@21 -- # case "$var" in 00:10:18.272 21:15:32 -- accel/accel.sh@19 -- # IFS=: 00:10:18.272 21:15:32 -- accel/accel.sh@19 -- # read -r var val 00:10:18.272 21:15:32 -- accel/accel.sh@20 -- # val= 00:10:18.272 21:15:32 -- accel/accel.sh@21 -- # case "$var" in 00:10:18.272 21:15:32 -- accel/accel.sh@19 -- # IFS=: 00:10:18.272 21:15:32 -- accel/accel.sh@19 -- # read -r var val 00:10:18.272 21:15:32 -- accel/accel.sh@20 -- # val=0xf 00:10:18.272 21:15:32 -- accel/accel.sh@21 -- # case "$var" in 00:10:18.272 21:15:32 -- accel/accel.sh@19 -- # IFS=: 00:10:18.272 21:15:32 -- accel/accel.sh@19 -- # read -r var val 00:10:18.272 21:15:32 -- accel/accel.sh@20 -- # val= 00:10:18.272 21:15:32 -- accel/accel.sh@21 -- # case "$var" in 00:10:18.272 21:15:32 -- accel/accel.sh@19 -- # IFS=: 00:10:18.272 21:15:32 -- accel/accel.sh@19 -- # read -r var val 00:10:18.272 21:15:32 -- accel/accel.sh@20 -- # val= 00:10:18.272 21:15:32 -- accel/accel.sh@21 -- # case "$var" in 00:10:18.272 21:15:32 -- accel/accel.sh@19 -- # IFS=: 00:10:18.272 21:15:32 -- accel/accel.sh@19 -- # read -r var val 00:10:18.272 21:15:32 -- accel/accel.sh@20 -- # val=decompress 00:10:18.272 21:15:32 -- accel/accel.sh@21 -- # case "$var" in 00:10:18.272 21:15:32 -- accel/accel.sh@23 -- # accel_opc=decompress 00:10:18.272 21:15:32 -- accel/accel.sh@19 -- # IFS=: 00:10:18.272 21:15:32 -- accel/accel.sh@19 -- # read -r var val 00:10:18.272 21:15:32 -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:18.272 21:15:32 -- accel/accel.sh@21 -- # case "$var" in 00:10:18.272 21:15:32 -- accel/accel.sh@19 -- # IFS=: 00:10:18.272 21:15:32 -- accel/accel.sh@19 -- # read -r var val 00:10:18.272 21:15:32 -- accel/accel.sh@20 -- # val= 00:10:18.272 21:15:32 -- accel/accel.sh@21 -- # case "$var" in 00:10:18.272 21:15:32 -- accel/accel.sh@19 -- # IFS=: 00:10:18.272 21:15:32 -- accel/accel.sh@19 -- # read -r var val 00:10:18.272 21:15:32 -- accel/accel.sh@20 -- # val=iaa 00:10:18.272 21:15:32 -- accel/accel.sh@21 -- # case "$var" in 00:10:18.272 21:15:32 -- accel/accel.sh@22 -- # accel_module=iaa 00:10:18.272 21:15:32 -- accel/accel.sh@19 -- # IFS=: 
00:10:18.272 21:15:32 -- accel/accel.sh@19 -- # read -r var val 00:10:18.272 21:15:32 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:10:18.272 21:15:32 -- accel/accel.sh@21 -- # case "$var" in 00:10:18.272 21:15:32 -- accel/accel.sh@19 -- # IFS=: 00:10:18.272 21:15:32 -- accel/accel.sh@19 -- # read -r var val 00:10:18.272 21:15:32 -- accel/accel.sh@20 -- # val=32 00:10:18.272 21:15:32 -- accel/accel.sh@21 -- # case "$var" in 00:10:18.272 21:15:32 -- accel/accel.sh@19 -- # IFS=: 00:10:18.272 21:15:32 -- accel/accel.sh@19 -- # read -r var val 00:10:18.272 21:15:32 -- accel/accel.sh@20 -- # val=32 00:10:18.272 21:15:32 -- accel/accel.sh@21 -- # case "$var" in 00:10:18.272 21:15:32 -- accel/accel.sh@19 -- # IFS=: 00:10:18.272 21:15:32 -- accel/accel.sh@19 -- # read -r var val 00:10:18.272 21:15:32 -- accel/accel.sh@20 -- # val=1 00:10:18.272 21:15:32 -- accel/accel.sh@21 -- # case "$var" in 00:10:18.272 21:15:32 -- accel/accel.sh@19 -- # IFS=: 00:10:18.272 21:15:32 -- accel/accel.sh@19 -- # read -r var val 00:10:18.272 21:15:32 -- accel/accel.sh@20 -- # val='1 seconds' 00:10:18.272 21:15:32 -- accel/accel.sh@21 -- # case "$var" in 00:10:18.272 21:15:32 -- accel/accel.sh@19 -- # IFS=: 00:10:18.272 21:15:32 -- accel/accel.sh@19 -- # read -r var val 00:10:18.272 21:15:32 -- accel/accel.sh@20 -- # val=Yes 00:10:18.272 21:15:32 -- accel/accel.sh@21 -- # case "$var" in 00:10:18.272 21:15:32 -- accel/accel.sh@19 -- # IFS=: 00:10:18.272 21:15:32 -- accel/accel.sh@19 -- # read -r var val 00:10:18.272 21:15:32 -- accel/accel.sh@20 -- # val= 00:10:18.272 21:15:32 -- accel/accel.sh@21 -- # case "$var" in 00:10:18.272 21:15:32 -- accel/accel.sh@19 -- # IFS=: 00:10:18.272 21:15:32 -- accel/accel.sh@19 -- # read -r var val 00:10:18.272 21:15:32 -- accel/accel.sh@20 -- # val= 00:10:18.272 21:15:32 -- accel/accel.sh@21 -- # case "$var" in 00:10:18.272 21:15:32 -- accel/accel.sh@19 -- # IFS=: 00:10:18.272 21:15:32 -- accel/accel.sh@19 -- # read -r var val 00:10:21.628 21:15:35 -- accel/accel.sh@20 -- # val= 00:10:21.628 21:15:35 -- accel/accel.sh@21 -- # case "$var" in 00:10:21.628 21:15:35 -- accel/accel.sh@19 -- # IFS=: 00:10:21.628 21:15:35 -- accel/accel.sh@19 -- # read -r var val 00:10:21.628 21:15:35 -- accel/accel.sh@20 -- # val= 00:10:21.628 21:15:35 -- accel/accel.sh@21 -- # case "$var" in 00:10:21.628 21:15:35 -- accel/accel.sh@19 -- # IFS=: 00:10:21.628 21:15:35 -- accel/accel.sh@19 -- # read -r var val 00:10:21.628 21:15:35 -- accel/accel.sh@20 -- # val= 00:10:21.628 21:15:35 -- accel/accel.sh@21 -- # case "$var" in 00:10:21.629 21:15:35 -- accel/accel.sh@19 -- # IFS=: 00:10:21.629 21:15:35 -- accel/accel.sh@19 -- # read -r var val 00:10:21.629 21:15:35 -- accel/accel.sh@20 -- # val= 00:10:21.629 21:15:35 -- accel/accel.sh@21 -- # case "$var" in 00:10:21.629 21:15:35 -- accel/accel.sh@19 -- # IFS=: 00:10:21.629 21:15:35 -- accel/accel.sh@19 -- # read -r var val 00:10:21.629 21:15:35 -- accel/accel.sh@20 -- # val= 00:10:21.629 21:15:35 -- accel/accel.sh@21 -- # case "$var" in 00:10:21.629 21:15:35 -- accel/accel.sh@19 -- # IFS=: 00:10:21.629 21:15:35 -- accel/accel.sh@19 -- # read -r var val 00:10:21.629 21:15:35 -- accel/accel.sh@20 -- # val= 00:10:21.629 21:15:35 -- accel/accel.sh@21 -- # case "$var" in 00:10:21.629 21:15:35 -- accel/accel.sh@19 -- # IFS=: 00:10:21.629 21:15:35 -- accel/accel.sh@19 -- # read -r var val 00:10:21.629 21:15:35 -- accel/accel.sh@20 -- # val= 00:10:21.629 21:15:35 -- accel/accel.sh@21 -- # case "$var" in 00:10:21.629 
21:15:35 -- accel/accel.sh@19 -- # IFS=: 00:10:21.629 21:15:35 -- accel/accel.sh@19 -- # read -r var val 00:10:21.629 21:15:35 -- accel/accel.sh@20 -- # val= 00:10:21.629 21:15:35 -- accel/accel.sh@21 -- # case "$var" in 00:10:21.629 21:15:35 -- accel/accel.sh@19 -- # IFS=: 00:10:21.629 21:15:35 -- accel/accel.sh@19 -- # read -r var val 00:10:21.629 21:15:35 -- accel/accel.sh@20 -- # val= 00:10:21.629 21:15:35 -- accel/accel.sh@21 -- # case "$var" in 00:10:21.629 21:15:35 -- accel/accel.sh@19 -- # IFS=: 00:10:21.629 21:15:35 -- accel/accel.sh@19 -- # read -r var val 00:10:21.629 21:15:35 -- accel/accel.sh@27 -- # [[ -n iaa ]] 00:10:21.629 21:15:35 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:10:21.629 21:15:35 -- accel/accel.sh@27 -- # [[ iaa == \i\a\a ]] 00:10:21.629 00:10:21.629 real 0m9.708s 00:10:21.629 user 0m0.007s 00:10:21.629 sys 0m0.000s 00:10:21.629 21:15:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:21.629 21:15:35 -- common/autotest_common.sh@10 -- # set +x 00:10:21.629 ************************************ 00:10:21.629 END TEST accel_decomp_mcore 00:10:21.629 ************************************ 00:10:21.629 21:15:35 -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:10:21.629 21:15:35 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:10:21.629 21:15:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:21.629 21:15:35 -- common/autotest_common.sh@10 -- # set +x 00:10:21.629 ************************************ 00:10:21.629 START TEST accel_decomp_full_mcore 00:10:21.629 ************************************ 00:10:21.629 21:15:35 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:10:21.629 21:15:35 -- accel/accel.sh@16 -- # local accel_opc 00:10:21.629 21:15:35 -- accel/accel.sh@17 -- # local accel_module 00:10:21.629 21:15:35 -- accel/accel.sh@19 -- # IFS=: 00:10:21.629 21:15:35 -- accel/accel.sh@19 -- # read -r var val 00:10:21.629 21:15:35 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:10:21.629 21:15:35 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:10:21.629 21:15:35 -- accel/accel.sh@12 -- # build_accel_config 00:10:21.629 21:15:35 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:21.629 21:15:35 -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:10:21.629 21:15:35 -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:10:21.629 21:15:35 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:10:21.629 21:15:35 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:10:21.629 21:15:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:21.629 21:15:35 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:21.629 21:15:35 -- accel/accel.sh@40 -- # local IFS=, 00:10:21.629 21:15:35 -- accel/accel.sh@41 -- # jq -r . 00:10:21.629 [2024-04-24 21:15:35.949928] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization... 
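The accel_decomp_full_mcore run starting here repeats the multicore decompress with one extra flag, -o 0. Judging from the logged transfer value, '111250 bytes' below versus '4096 bytes' in the previous test, this switches the I/O size from 4 KiB blocks to the whole test file; that reading of -o 0 is an inference from the log, not stated in it. Reusing the $SPDK and $cfg assumptions from the sketch above:

    # same workload as accel_decomp_mcore, plus -o 0 (full-buffer I/O size)
    "$SPDK/build/examples/accel_perf" -c <(echo "$cfg") -t 1 -w decompress \
        -l "$SPDK/test/accel/bib" -y -o 0 -m 0xf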
00:10:21.629 [2024-04-24 21:15:35.950072] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1071043 ] 00:10:21.629 EAL: No free 2048 kB hugepages reported on node 1 00:10:21.629 [2024-04-24 21:15:36.067316] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:21.629 [2024-04-24 21:15:36.168162] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:21.629 [2024-04-24 21:15:36.168192] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:21.629 [2024-04-24 21:15:36.168313] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:21.629 [2024-04-24 21:15:36.168319] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:21.629 [2024-04-24 21:15:36.172873] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:10:21.629 [2024-04-24 21:15:36.180837] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:10:28.217 21:15:42 -- accel/accel.sh@20 -- # val= 00:10:28.217 21:15:42 -- accel/accel.sh@21 -- # case "$var" in 00:10:28.217 21:15:42 -- accel/accel.sh@19 -- # IFS=: 00:10:28.217 21:15:42 -- accel/accel.sh@19 -- # read -r var val 00:10:28.217 21:15:42 -- accel/accel.sh@20 -- # val= 00:10:28.217 21:15:42 -- accel/accel.sh@21 -- # case "$var" in 00:10:28.218 21:15:42 -- accel/accel.sh@19 -- # IFS=: 00:10:28.218 21:15:42 -- accel/accel.sh@19 -- # read -r var val 00:10:28.218 21:15:42 -- accel/accel.sh@20 -- # val= 00:10:28.218 21:15:42 -- accel/accel.sh@21 -- # case "$var" in 00:10:28.218 21:15:42 -- accel/accel.sh@19 -- # IFS=: 00:10:28.218 21:15:42 -- accel/accel.sh@19 -- # read -r var val 00:10:28.218 21:15:42 -- accel/accel.sh@20 -- # val=0xf 00:10:28.218 21:15:42 -- accel/accel.sh@21 -- # case "$var" in 00:10:28.218 21:15:42 -- accel/accel.sh@19 -- # IFS=: 00:10:28.218 21:15:42 -- accel/accel.sh@19 -- # read -r var val 00:10:28.218 21:15:42 -- accel/accel.sh@20 -- # val= 00:10:28.218 21:15:42 -- accel/accel.sh@21 -- # case "$var" in 00:10:28.218 21:15:42 -- accel/accel.sh@19 -- # IFS=: 00:10:28.218 21:15:42 -- accel/accel.sh@19 -- # read -r var val 00:10:28.218 21:15:42 -- accel/accel.sh@20 -- # val= 00:10:28.218 21:15:42 -- accel/accel.sh@21 -- # case "$var" in 00:10:28.218 21:15:42 -- accel/accel.sh@19 -- # IFS=: 00:10:28.218 21:15:42 -- accel/accel.sh@19 -- # read -r var val 00:10:28.218 21:15:42 -- accel/accel.sh@20 -- # val=decompress 00:10:28.218 21:15:42 -- accel/accel.sh@21 -- # case "$var" in 00:10:28.218 21:15:42 -- accel/accel.sh@23 -- # accel_opc=decompress 00:10:28.218 21:15:42 -- accel/accel.sh@19 -- # IFS=: 00:10:28.218 21:15:42 -- accel/accel.sh@19 -- # read -r var val 00:10:28.218 21:15:42 -- accel/accel.sh@20 -- # val='111250 bytes' 00:10:28.218 21:15:42 -- accel/accel.sh@21 -- # case "$var" in 00:10:28.218 21:15:42 -- accel/accel.sh@19 -- # IFS=: 00:10:28.218 21:15:42 -- accel/accel.sh@19 -- # read -r var val 00:10:28.218 21:15:42 -- accel/accel.sh@20 -- # val= 00:10:28.218 21:15:42 -- accel/accel.sh@21 -- # case "$var" in 00:10:28.218 21:15:42 -- accel/accel.sh@19 -- # IFS=: 00:10:28.218 21:15:42 -- accel/accel.sh@19 -- # read -r var val 00:10:28.218 21:15:42 -- accel/accel.sh@20 -- # val=iaa 00:10:28.218 21:15:42 -- accel/accel.sh@21 -- # case "$var" in 00:10:28.218 21:15:42 -- accel/accel.sh@22 -- # accel_module=iaa 00:10:28.218 21:15:42 -- accel/accel.sh@19 -- # IFS=: 
00:10:28.218 21:15:42 -- accel/accel.sh@19 -- # read -r var val 00:10:28.218 21:15:42 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:10:28.218 21:15:42 -- accel/accel.sh@21 -- # case "$var" in 00:10:28.218 21:15:42 -- accel/accel.sh@19 -- # IFS=: 00:10:28.218 21:15:42 -- accel/accel.sh@19 -- # read -r var val 00:10:28.218 21:15:42 -- accel/accel.sh@20 -- # val=32 00:10:28.218 21:15:42 -- accel/accel.sh@21 -- # case "$var" in 00:10:28.218 21:15:42 -- accel/accel.sh@19 -- # IFS=: 00:10:28.218 21:15:42 -- accel/accel.sh@19 -- # read -r var val 00:10:28.218 21:15:42 -- accel/accel.sh@20 -- # val=32 00:10:28.218 21:15:42 -- accel/accel.sh@21 -- # case "$var" in 00:10:28.218 21:15:42 -- accel/accel.sh@19 -- # IFS=: 00:10:28.218 21:15:42 -- accel/accel.sh@19 -- # read -r var val 00:10:28.218 21:15:42 -- accel/accel.sh@20 -- # val=1 00:10:28.218 21:15:42 -- accel/accel.sh@21 -- # case "$var" in 00:10:28.218 21:15:42 -- accel/accel.sh@19 -- # IFS=: 00:10:28.218 21:15:42 -- accel/accel.sh@19 -- # read -r var val 00:10:28.218 21:15:42 -- accel/accel.sh@20 -- # val='1 seconds' 00:10:28.218 21:15:42 -- accel/accel.sh@21 -- # case "$var" in 00:10:28.218 21:15:42 -- accel/accel.sh@19 -- # IFS=: 00:10:28.218 21:15:42 -- accel/accel.sh@19 -- # read -r var val 00:10:28.218 21:15:42 -- accel/accel.sh@20 -- # val=Yes 00:10:28.218 21:15:42 -- accel/accel.sh@21 -- # case "$var" in 00:10:28.218 21:15:42 -- accel/accel.sh@19 -- # IFS=: 00:10:28.218 21:15:42 -- accel/accel.sh@19 -- # read -r var val 00:10:28.218 21:15:42 -- accel/accel.sh@20 -- # val= 00:10:28.218 21:15:42 -- accel/accel.sh@21 -- # case "$var" in 00:10:28.218 21:15:42 -- accel/accel.sh@19 -- # IFS=: 00:10:28.218 21:15:42 -- accel/accel.sh@19 -- # read -r var val 00:10:28.218 21:15:42 -- accel/accel.sh@20 -- # val= 00:10:28.218 21:15:42 -- accel/accel.sh@21 -- # case "$var" in 00:10:28.218 21:15:42 -- accel/accel.sh@19 -- # IFS=: 00:10:28.218 21:15:42 -- accel/accel.sh@19 -- # read -r var val 00:10:30.762 21:15:45 -- accel/accel.sh@20 -- # val= 00:10:30.762 21:15:45 -- accel/accel.sh@21 -- # case "$var" in 00:10:30.762 21:15:45 -- accel/accel.sh@19 -- # IFS=: 00:10:30.762 21:15:45 -- accel/accel.sh@19 -- # read -r var val 00:10:30.762 21:15:45 -- accel/accel.sh@20 -- # val= 00:10:30.762 21:15:45 -- accel/accel.sh@21 -- # case "$var" in 00:10:30.762 21:15:45 -- accel/accel.sh@19 -- # IFS=: 00:10:30.762 21:15:45 -- accel/accel.sh@19 -- # read -r var val 00:10:30.762 21:15:45 -- accel/accel.sh@20 -- # val= 00:10:30.762 21:15:45 -- accel/accel.sh@21 -- # case "$var" in 00:10:30.762 21:15:45 -- accel/accel.sh@19 -- # IFS=: 00:10:30.762 21:15:45 -- accel/accel.sh@19 -- # read -r var val 00:10:30.762 21:15:45 -- accel/accel.sh@20 -- # val= 00:10:30.762 21:15:45 -- accel/accel.sh@21 -- # case "$var" in 00:10:30.762 21:15:45 -- accel/accel.sh@19 -- # IFS=: 00:10:30.762 21:15:45 -- accel/accel.sh@19 -- # read -r var val 00:10:30.762 21:15:45 -- accel/accel.sh@20 -- # val= 00:10:30.762 21:15:45 -- accel/accel.sh@21 -- # case "$var" in 00:10:30.762 21:15:45 -- accel/accel.sh@19 -- # IFS=: 00:10:30.762 21:15:45 -- accel/accel.sh@19 -- # read -r var val 00:10:30.762 21:15:45 -- accel/accel.sh@20 -- # val= 00:10:30.762 21:15:45 -- accel/accel.sh@21 -- # case "$var" in 00:10:30.762 21:15:45 -- accel/accel.sh@19 -- # IFS=: 00:10:30.762 21:15:45 -- accel/accel.sh@19 -- # read -r var val 00:10:30.762 21:15:45 -- accel/accel.sh@20 -- # val= 00:10:30.762 21:15:45 -- accel/accel.sh@21 -- # case "$var" in 00:10:30.762 
21:15:45 -- accel/accel.sh@19 -- # IFS=: 00:10:30.762 21:15:45 -- accel/accel.sh@19 -- # read -r var val 00:10:30.762 21:15:45 -- accel/accel.sh@20 -- # val= 00:10:30.762 21:15:45 -- accel/accel.sh@21 -- # case "$var" in 00:10:30.762 21:15:45 -- accel/accel.sh@19 -- # IFS=: 00:10:30.762 21:15:45 -- accel/accel.sh@19 -- # read -r var val 00:10:30.762 21:15:45 -- accel/accel.sh@20 -- # val= 00:10:30.762 21:15:45 -- accel/accel.sh@21 -- # case "$var" in 00:10:30.762 21:15:45 -- accel/accel.sh@19 -- # IFS=: 00:10:30.762 21:15:45 -- accel/accel.sh@19 -- # read -r var val 00:10:30.762 21:15:45 -- accel/accel.sh@27 -- # [[ -n iaa ]] 00:10:30.762 21:15:45 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:10:30.762 21:15:45 -- accel/accel.sh@27 -- # [[ iaa == \i\a\a ]] 00:10:30.762 00:10:30.762 real 0m9.731s 00:10:30.762 user 0m0.004s 00:10:30.762 sys 0m0.003s 00:10:30.762 21:15:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:30.762 21:15:45 -- common/autotest_common.sh@10 -- # set +x 00:10:30.762 ************************************ 00:10:30.762 END TEST accel_decomp_full_mcore 00:10:30.762 ************************************ 00:10:30.762 21:15:45 -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -T 2 00:10:30.762 21:15:45 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:10:30.762 21:15:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:30.762 21:15:45 -- common/autotest_common.sh@10 -- # set +x 00:10:31.022 ************************************ 00:10:31.022 START TEST accel_decomp_mthread 00:10:31.022 ************************************ 00:10:31.022 21:15:45 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -T 2 00:10:31.022 21:15:45 -- accel/accel.sh@16 -- # local accel_opc 00:10:31.022 21:15:45 -- accel/accel.sh@17 -- # local accel_module 00:10:31.022 21:15:45 -- accel/accel.sh@19 -- # IFS=: 00:10:31.022 21:15:45 -- accel/accel.sh@19 -- # read -r var val 00:10:31.022 21:15:45 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -T 2 00:10:31.022 21:15:45 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -T 2 00:10:31.022 21:15:45 -- accel/accel.sh@12 -- # build_accel_config 00:10:31.022 21:15:45 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:31.022 21:15:45 -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:10:31.022 21:15:45 -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:10:31.022 21:15:45 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:10:31.023 21:15:45 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:10:31.023 21:15:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:31.023 21:15:45 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:31.023 21:15:45 -- accel/accel.sh@40 -- # local IFS=, 00:10:31.023 21:15:45 -- accel/accel.sh@41 -- # jq -r . 00:10:31.023 [2024-04-24 21:15:45.786311] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization... 
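accel_decomp_mthread, beginning here, drops the core mask (the EAL line shows -c 0x1 and only reactor 0 starts) and instead passes -T 2, accel_perf's threads-per-core knob, so two worker threads share one core. Same assumptions as the first sketch:

    # one core (default mask), two accel_perf worker threads (-T 2)
    "$SPDK/build/examples/accel_perf" -c <(echo "$cfg") -t 1 -w decompress \
        -l "$SPDK/test/accel/bib" -y -T 2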
00:10:31.023 [2024-04-24 21:15:45.786421] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1073021 ] 00:10:31.023 EAL: No free 2048 kB hugepages reported on node 1 00:10:31.023 [2024-04-24 21:15:45.896043] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:31.283 [2024-04-24 21:15:45.992679] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:31.283 [2024-04-24 21:15:45.997170] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:10:31.283 [2024-04-24 21:15:46.005150] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:10:37.857 21:15:52 -- accel/accel.sh@20 -- # val= 00:10:37.857 21:15:52 -- accel/accel.sh@21 -- # case "$var" in 00:10:37.857 21:15:52 -- accel/accel.sh@19 -- # IFS=: 00:10:37.857 21:15:52 -- accel/accel.sh@19 -- # read -r var val 00:10:37.857 21:15:52 -- accel/accel.sh@20 -- # val= 00:10:37.857 21:15:52 -- accel/accel.sh@21 -- # case "$var" in 00:10:37.857 21:15:52 -- accel/accel.sh@19 -- # IFS=: 00:10:37.857 21:15:52 -- accel/accel.sh@19 -- # read -r var val 00:10:37.857 21:15:52 -- accel/accel.sh@20 -- # val= 00:10:37.857 21:15:52 -- accel/accel.sh@21 -- # case "$var" in 00:10:37.857 21:15:52 -- accel/accel.sh@19 -- # IFS=: 00:10:37.857 21:15:52 -- accel/accel.sh@19 -- # read -r var val 00:10:37.857 21:15:52 -- accel/accel.sh@20 -- # val=0x1 00:10:37.857 21:15:52 -- accel/accel.sh@21 -- # case "$var" in 00:10:37.857 21:15:52 -- accel/accel.sh@19 -- # IFS=: 00:10:37.857 21:15:52 -- accel/accel.sh@19 -- # read -r var val 00:10:37.857 21:15:52 -- accel/accel.sh@20 -- # val= 00:10:37.857 21:15:52 -- accel/accel.sh@21 -- # case "$var" in 00:10:37.857 21:15:52 -- accel/accel.sh@19 -- # IFS=: 00:10:37.857 21:15:52 -- accel/accel.sh@19 -- # read -r var val 00:10:37.857 21:15:52 -- accel/accel.sh@20 -- # val= 00:10:37.857 21:15:52 -- accel/accel.sh@21 -- # case "$var" in 00:10:37.857 21:15:52 -- accel/accel.sh@19 -- # IFS=: 00:10:37.857 21:15:52 -- accel/accel.sh@19 -- # read -r var val 00:10:37.857 21:15:52 -- accel/accel.sh@20 -- # val=decompress 00:10:37.857 21:15:52 -- accel/accel.sh@21 -- # case "$var" in 00:10:37.857 21:15:52 -- accel/accel.sh@23 -- # accel_opc=decompress 00:10:37.857 21:15:52 -- accel/accel.sh@19 -- # IFS=: 00:10:37.857 21:15:52 -- accel/accel.sh@19 -- # read -r var val 00:10:37.857 21:15:52 -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:37.857 21:15:52 -- accel/accel.sh@21 -- # case "$var" in 00:10:37.857 21:15:52 -- accel/accel.sh@19 -- # IFS=: 00:10:37.857 21:15:52 -- accel/accel.sh@19 -- # read -r var val 00:10:37.857 21:15:52 -- accel/accel.sh@20 -- # val= 00:10:37.857 21:15:52 -- accel/accel.sh@21 -- # case "$var" in 00:10:37.857 21:15:52 -- accel/accel.sh@19 -- # IFS=: 00:10:37.857 21:15:52 -- accel/accel.sh@19 -- # read -r var val 00:10:37.857 21:15:52 -- accel/accel.sh@20 -- # val=iaa 00:10:37.857 21:15:52 -- accel/accel.sh@21 -- # case "$var" in 00:10:37.857 21:15:52 -- accel/accel.sh@22 -- # accel_module=iaa 00:10:37.857 21:15:52 -- accel/accel.sh@19 -- # IFS=: 00:10:37.857 21:15:52 -- accel/accel.sh@19 -- # read -r var val 00:10:37.857 21:15:52 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:10:37.857 21:15:52 -- accel/accel.sh@21 -- # case "$var" in 00:10:37.857 21:15:52 -- accel/accel.sh@19 -- # IFS=: 00:10:37.857 21:15:52 -- 
accel/accel.sh@19 -- # read -r var val 00:10:37.857 21:15:52 -- accel/accel.sh@20 -- # val=32 00:10:37.857 21:15:52 -- accel/accel.sh@21 -- # case "$var" in 00:10:37.857 21:15:52 -- accel/accel.sh@19 -- # IFS=: 00:10:37.857 21:15:52 -- accel/accel.sh@19 -- # read -r var val 00:10:37.857 21:15:52 -- accel/accel.sh@20 -- # val=32 00:10:37.857 21:15:52 -- accel/accel.sh@21 -- # case "$var" in 00:10:37.857 21:15:52 -- accel/accel.sh@19 -- # IFS=: 00:10:37.857 21:15:52 -- accel/accel.sh@19 -- # read -r var val 00:10:37.857 21:15:52 -- accel/accel.sh@20 -- # val=2 00:10:37.857 21:15:52 -- accel/accel.sh@21 -- # case "$var" in 00:10:37.857 21:15:52 -- accel/accel.sh@19 -- # IFS=: 00:10:37.857 21:15:52 -- accel/accel.sh@19 -- # read -r var val 00:10:37.857 21:15:52 -- accel/accel.sh@20 -- # val='1 seconds' 00:10:37.857 21:15:52 -- accel/accel.sh@21 -- # case "$var" in 00:10:37.857 21:15:52 -- accel/accel.sh@19 -- # IFS=: 00:10:37.857 21:15:52 -- accel/accel.sh@19 -- # read -r var val 00:10:37.857 21:15:52 -- accel/accel.sh@20 -- # val=Yes 00:10:37.857 21:15:52 -- accel/accel.sh@21 -- # case "$var" in 00:10:37.857 21:15:52 -- accel/accel.sh@19 -- # IFS=: 00:10:37.857 21:15:52 -- accel/accel.sh@19 -- # read -r var val 00:10:37.857 21:15:52 -- accel/accel.sh@20 -- # val= 00:10:37.857 21:15:52 -- accel/accel.sh@21 -- # case "$var" in 00:10:37.857 21:15:52 -- accel/accel.sh@19 -- # IFS=: 00:10:37.857 21:15:52 -- accel/accel.sh@19 -- # read -r var val 00:10:37.857 21:15:52 -- accel/accel.sh@20 -- # val= 00:10:37.857 21:15:52 -- accel/accel.sh@21 -- # case "$var" in 00:10:37.857 21:15:52 -- accel/accel.sh@19 -- # IFS=: 00:10:37.857 21:15:52 -- accel/accel.sh@19 -- # read -r var val 00:10:41.149 21:15:55 -- accel/accel.sh@20 -- # val= 00:10:41.149 21:15:55 -- accel/accel.sh@21 -- # case "$var" in 00:10:41.149 21:15:55 -- accel/accel.sh@19 -- # IFS=: 00:10:41.149 21:15:55 -- accel/accel.sh@19 -- # read -r var val 00:10:41.149 21:15:55 -- accel/accel.sh@20 -- # val= 00:10:41.149 21:15:55 -- accel/accel.sh@21 -- # case "$var" in 00:10:41.149 21:15:55 -- accel/accel.sh@19 -- # IFS=: 00:10:41.149 21:15:55 -- accel/accel.sh@19 -- # read -r var val 00:10:41.149 21:15:55 -- accel/accel.sh@20 -- # val= 00:10:41.149 21:15:55 -- accel/accel.sh@21 -- # case "$var" in 00:10:41.149 21:15:55 -- accel/accel.sh@19 -- # IFS=: 00:10:41.149 21:15:55 -- accel/accel.sh@19 -- # read -r var val 00:10:41.149 21:15:55 -- accel/accel.sh@20 -- # val= 00:10:41.149 21:15:55 -- accel/accel.sh@21 -- # case "$var" in 00:10:41.149 21:15:55 -- accel/accel.sh@19 -- # IFS=: 00:10:41.149 21:15:55 -- accel/accel.sh@19 -- # read -r var val 00:10:41.149 21:15:55 -- accel/accel.sh@20 -- # val= 00:10:41.149 21:15:55 -- accel/accel.sh@21 -- # case "$var" in 00:10:41.149 21:15:55 -- accel/accel.sh@19 -- # IFS=: 00:10:41.149 21:15:55 -- accel/accel.sh@19 -- # read -r var val 00:10:41.149 21:15:55 -- accel/accel.sh@20 -- # val= 00:10:41.149 21:15:55 -- accel/accel.sh@21 -- # case "$var" in 00:10:41.149 21:15:55 -- accel/accel.sh@19 -- # IFS=: 00:10:41.149 21:15:55 -- accel/accel.sh@19 -- # read -r var val 00:10:41.149 21:15:55 -- accel/accel.sh@20 -- # val= 00:10:41.149 21:15:55 -- accel/accel.sh@21 -- # case "$var" in 00:10:41.149 21:15:55 -- accel/accel.sh@19 -- # IFS=: 00:10:41.149 21:15:55 -- accel/accel.sh@19 -- # read -r var val 00:10:41.149 21:15:55 -- accel/accel.sh@27 -- # [[ -n iaa ]] 00:10:41.149 21:15:55 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:10:41.149 21:15:55 -- accel/accel.sh@27 -- # [[ iaa == \i\a\a ]] 00:10:41.149 
00:10:41.149 real 0m9.681s user 0m3.279s sys 0m0.232s 21:15:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:41.149 21:15:55 -- common/autotest_common.sh@10 -- # set +x 00:10:41.149 ************************************ 00:10:41.149 END TEST accel_decomp_mthread 00:10:41.149 ************************************ 00:10:41.149 21:15:55 -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:10:41.149 21:15:55 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:10:41.149 21:15:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:41.149 21:15:55 -- common/autotest_common.sh@10 -- # set +x 00:10:41.149 ************************************ 00:10:41.149 START TEST accel_decomp_full_mthread 00:10:41.149 ************************************ 00:10:41.149 21:15:55 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:10:41.149 21:15:55 -- accel/accel.sh@16 -- # local accel_opc 00:10:41.149 21:15:55 -- accel/accel.sh@17 -- # local accel_module 00:10:41.149 21:15:55 -- accel/accel.sh@19 -- # IFS=: 00:10:41.149 21:15:55 -- accel/accel.sh@19 -- # read -r var val 00:10:41.149 21:15:55 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:10:41.149 21:15:55 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:10:41.149 21:15:55 -- accel/accel.sh@12 -- # build_accel_config 00:10:41.149 21:15:55 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:41.149 21:15:55 -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:10:41.149 21:15:55 -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:10:41.149 21:15:55 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:10:41.149 21:15:55 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:10:41.149 21:15:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:41.149 21:15:55 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:41.149 21:15:55 -- accel/accel.sh@40 -- # local IFS=, 00:10:41.149 21:15:55 -- accel/accel.sh@41 -- # jq -r . 00:10:41.149 [2024-04-24 21:15:55.572953] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization...
00:10:41.149 [2024-04-24 21:15:55.573052] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1074950 ] 00:10:41.149 EAL: No free 2048 kB hugepages reported on node 1 00:10:41.149 [2024-04-24 21:15:55.683810] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:41.149 [2024-04-24 21:15:55.778314] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:41.149 [2024-04-24 21:15:55.782771] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:10:41.149 [2024-04-24 21:15:55.790739] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:10:47.726 21:16:02 -- accel/accel.sh@20 -- # val= 00:10:47.726 21:16:02 -- accel/accel.sh@21 -- # case "$var" in 00:10:47.726 21:16:02 -- accel/accel.sh@19 -- # IFS=: 00:10:47.726 21:16:02 -- accel/accel.sh@19 -- # read -r var val 00:10:47.726 21:16:02 -- accel/accel.sh@20 -- # val= 00:10:47.726 21:16:02 -- accel/accel.sh@21 -- # case "$var" in 00:10:47.726 21:16:02 -- accel/accel.sh@19 -- # IFS=: 00:10:47.726 21:16:02 -- accel/accel.sh@19 -- # read -r var val 00:10:47.726 21:16:02 -- accel/accel.sh@20 -- # val= 00:10:47.726 21:16:02 -- accel/accel.sh@21 -- # case "$var" in 00:10:47.726 21:16:02 -- accel/accel.sh@19 -- # IFS=: 00:10:47.726 21:16:02 -- accel/accel.sh@19 -- # read -r var val 00:10:47.726 21:16:02 -- accel/accel.sh@20 -- # val=0x1 00:10:47.727 21:16:02 -- accel/accel.sh@21 -- # case "$var" in 00:10:47.727 21:16:02 -- accel/accel.sh@19 -- # IFS=: 00:10:47.727 21:16:02 -- accel/accel.sh@19 -- # read -r var val 00:10:47.727 21:16:02 -- accel/accel.sh@20 -- # val= 00:10:47.727 21:16:02 -- accel/accel.sh@21 -- # case "$var" in 00:10:47.727 21:16:02 -- accel/accel.sh@19 -- # IFS=: 00:10:47.727 21:16:02 -- accel/accel.sh@19 -- # read -r var val 00:10:47.727 21:16:02 -- accel/accel.sh@20 -- # val= 00:10:47.727 21:16:02 -- accel/accel.sh@21 -- # case "$var" in 00:10:47.727 21:16:02 -- accel/accel.sh@19 -- # IFS=: 00:10:47.727 21:16:02 -- accel/accel.sh@19 -- # read -r var val 00:10:47.727 21:16:02 -- accel/accel.sh@20 -- # val=decompress 00:10:47.727 21:16:02 -- accel/accel.sh@21 -- # case "$var" in 00:10:47.727 21:16:02 -- accel/accel.sh@23 -- # accel_opc=decompress 00:10:47.727 21:16:02 -- accel/accel.sh@19 -- # IFS=: 00:10:47.727 21:16:02 -- accel/accel.sh@19 -- # read -r var val 00:10:47.727 21:16:02 -- accel/accel.sh@20 -- # val='111250 bytes' 00:10:47.727 21:16:02 -- accel/accel.sh@21 -- # case "$var" in 00:10:47.727 21:16:02 -- accel/accel.sh@19 -- # IFS=: 00:10:47.727 21:16:02 -- accel/accel.sh@19 -- # read -r var val 00:10:47.727 21:16:02 -- accel/accel.sh@20 -- # val= 00:10:47.727 21:16:02 -- accel/accel.sh@21 -- # case "$var" in 00:10:47.727 21:16:02 -- accel/accel.sh@19 -- # IFS=: 00:10:47.727 21:16:02 -- accel/accel.sh@19 -- # read -r var val 00:10:47.727 21:16:02 -- accel/accel.sh@20 -- # val=iaa 00:10:47.727 21:16:02 -- accel/accel.sh@21 -- # case "$var" in 00:10:47.727 21:16:02 -- accel/accel.sh@22 -- # accel_module=iaa 00:10:47.727 21:16:02 -- accel/accel.sh@19 -- # IFS=: 00:10:47.727 21:16:02 -- accel/accel.sh@19 -- # read -r var val 00:10:47.727 21:16:02 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:10:47.727 21:16:02 -- accel/accel.sh@21 -- # case "$var" in 00:10:47.727 21:16:02 -- accel/accel.sh@19 -- # IFS=: 00:10:47.727 21:16:02 -- 
accel/accel.sh@19 -- # read -r var val 00:10:47.727 21:16:02 -- accel/accel.sh@20 -- # val=32 00:10:47.727 21:16:02 -- accel/accel.sh@21 -- # case "$var" in 00:10:47.727 21:16:02 -- accel/accel.sh@19 -- # IFS=: 00:10:47.727 21:16:02 -- accel/accel.sh@19 -- # read -r var val 00:10:47.727 21:16:02 -- accel/accel.sh@20 -- # val=32 00:10:47.727 21:16:02 -- accel/accel.sh@21 -- # case "$var" in 00:10:47.727 21:16:02 -- accel/accel.sh@19 -- # IFS=: 00:10:47.727 21:16:02 -- accel/accel.sh@19 -- # read -r var val 00:10:47.727 21:16:02 -- accel/accel.sh@20 -- # val=2 00:10:47.727 21:16:02 -- accel/accel.sh@21 -- # case "$var" in 00:10:47.727 21:16:02 -- accel/accel.sh@19 -- # IFS=: 00:10:47.727 21:16:02 -- accel/accel.sh@19 -- # read -r var val 00:10:47.727 21:16:02 -- accel/accel.sh@20 -- # val='1 seconds' 00:10:47.727 21:16:02 -- accel/accel.sh@21 -- # case "$var" in 00:10:47.727 21:16:02 -- accel/accel.sh@19 -- # IFS=: 00:10:47.727 21:16:02 -- accel/accel.sh@19 -- # read -r var val 00:10:47.727 21:16:02 -- accel/accel.sh@20 -- # val=Yes 00:10:47.727 21:16:02 -- accel/accel.sh@21 -- # case "$var" in 00:10:47.727 21:16:02 -- accel/accel.sh@19 -- # IFS=: 00:10:47.727 21:16:02 -- accel/accel.sh@19 -- # read -r var val 00:10:47.727 21:16:02 -- accel/accel.sh@20 -- # val= 00:10:47.727 21:16:02 -- accel/accel.sh@21 -- # case "$var" in 00:10:47.727 21:16:02 -- accel/accel.sh@19 -- # IFS=: 00:10:47.727 21:16:02 -- accel/accel.sh@19 -- # read -r var val 00:10:47.727 21:16:02 -- accel/accel.sh@20 -- # val= 00:10:47.727 21:16:02 -- accel/accel.sh@21 -- # case "$var" in 00:10:47.727 21:16:02 -- accel/accel.sh@19 -- # IFS=: 00:10:47.727 21:16:02 -- accel/accel.sh@19 -- # read -r var val 00:10:50.460 21:16:05 -- accel/accel.sh@20 -- # val= 00:10:50.460 21:16:05 -- accel/accel.sh@21 -- # case "$var" in 00:10:50.460 21:16:05 -- accel/accel.sh@19 -- # IFS=: 00:10:50.460 21:16:05 -- accel/accel.sh@19 -- # read -r var val 00:10:50.460 21:16:05 -- accel/accel.sh@20 -- # val= 00:10:50.460 21:16:05 -- accel/accel.sh@21 -- # case "$var" in 00:10:50.460 21:16:05 -- accel/accel.sh@19 -- # IFS=: 00:10:50.460 21:16:05 -- accel/accel.sh@19 -- # read -r var val 00:10:50.460 21:16:05 -- accel/accel.sh@20 -- # val= 00:10:50.460 21:16:05 -- accel/accel.sh@21 -- # case "$var" in 00:10:50.460 21:16:05 -- accel/accel.sh@19 -- # IFS=: 00:10:50.460 21:16:05 -- accel/accel.sh@19 -- # read -r var val 00:10:50.460 21:16:05 -- accel/accel.sh@20 -- # val= 00:10:50.460 21:16:05 -- accel/accel.sh@21 -- # case "$var" in 00:10:50.460 21:16:05 -- accel/accel.sh@19 -- # IFS=: 00:10:50.460 21:16:05 -- accel/accel.sh@19 -- # read -r var val 00:10:50.460 21:16:05 -- accel/accel.sh@20 -- # val= 00:10:50.460 21:16:05 -- accel/accel.sh@21 -- # case "$var" in 00:10:50.460 21:16:05 -- accel/accel.sh@19 -- # IFS=: 00:10:50.460 21:16:05 -- accel/accel.sh@19 -- # read -r var val 00:10:50.460 21:16:05 -- accel/accel.sh@20 -- # val= 00:10:50.460 21:16:05 -- accel/accel.sh@21 -- # case "$var" in 00:10:50.460 21:16:05 -- accel/accel.sh@19 -- # IFS=: 00:10:50.460 21:16:05 -- accel/accel.sh@19 -- # read -r var val 00:10:50.460 21:16:05 -- accel/accel.sh@20 -- # val= 00:10:50.460 21:16:05 -- accel/accel.sh@21 -- # case "$var" in 00:10:50.460 21:16:05 -- accel/accel.sh@19 -- # IFS=: 00:10:50.460 21:16:05 -- accel/accel.sh@19 -- # read -r var val 00:10:50.460 21:16:05 -- accel/accel.sh@27 -- # [[ -n iaa ]] 00:10:50.460 21:16:05 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:10:50.460 21:16:05 -- accel/accel.sh@27 -- # [[ iaa == \i\a\a ]] 00:10:50.460 
00:10:50.460 real 0m9.686s user 0m3.278s sys 0m0.225s 21:16:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:50.460 21:16:05 -- common/autotest_common.sh@10 -- # set +x 00:10:50.460 ************************************ 00:10:50.460 END TEST accel_decomp_full_mthread 00:10:50.460 ************************************ 00:10:50.460 21:16:05 -- accel/accel.sh@124 -- # [[ n == y ]] 00:10:50.460 21:16:05 -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:10:50.460 21:16:05 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:10:50.460 21:16:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:50.460 21:16:05 -- common/autotest_common.sh@10 -- # set +x 00:10:50.460 21:16:05 -- accel/accel.sh@137 -- # build_accel_config 00:10:50.460 21:16:05 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:50.460 21:16:05 -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:10:50.460 21:16:05 -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:10:50.460 21:16:05 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:10:50.460 21:16:05 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:10:50.460 21:16:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:50.460 21:16:05 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:50.460 21:16:05 -- accel/accel.sh@40 -- # local IFS=, 00:10:50.460 21:16:05 -- accel/accel.sh@41 -- # jq -r . 00:10:50.460 ************************************ 00:10:50.460 START TEST accel_dif_functional_tests 00:10:50.460 ************************************ 00:10:50.460 21:16:05 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:10:50.460 [2024-04-24 21:16:05.403752] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization...
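The DIF stage starting here runs a CUnit binary, test/accel/dif/dif, rather than accel_perf, again fed the DSA/IAA config on fd 62. The *ERROR* dumps that follow are expected: the 'not generated' and 'incorrect' cases submit deliberately bad Guard/App/Ref tags, and each test passes precisely because the DSA path reports the mismatch. To rerun only this suite, under the same config assumption as above:

    # standalone DIF suite run; the *ERROR* tag-mismatch dumps are the point
    "$SPDK/test/accel/dif/dif" -c <(echo "$cfg")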
00:10:50.460 [2024-04-24 21:16:05.403847] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1076819 ] 00:10:50.720 EAL: No free 2048 kB hugepages reported on node 1 00:10:50.720 [2024-04-24 21:16:05.521470] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:50.720 [2024-04-24 21:16:05.639068] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:50.720 [2024-04-24 21:16:05.639095] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:50.720 [2024-04-24 21:16:05.639099] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:50.720 [2024-04-24 21:16:05.643703] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:10:50.720 [2024-04-24 21:16:05.651663] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:11:00.709 00:11:00.709 00:11:00.709 CUnit - A unit testing framework for C - Version 2.1-3 00:11:00.709 http://cunit.sourceforge.net/ 00:11:00.709 00:11:00.709 00:11:00.709 Suite: accel_dif 00:11:00.709 Test: verify: DIF generated, GUARD check ...passed 00:11:00.709 Test: verify: DIF generated, APPTAG check ...passed 00:11:00.709 Test: verify: DIF generated, REFTAG check ...passed 00:11:00.709 Test: verify: DIF not generated, GUARD check ...[2024-04-24 21:16:14.308600] idxd.c:1812:spdk_idxd_process_events: *ERROR*: Completion status 0x9 00:11:00.709 [2024-04-24 21:16:14.308652] idxd_user.c: 428:user_idxd_dump_sw_err: *NOTICE*: SW Error Raw:[2024-04-24 21:16:14.308664] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:11:00.709 [2024-04-24 21:16:14.308673] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:11:00.709 [2024-04-24 21:16:14.308680] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:11:00.709 [2024-04-24 21:16:14.308687] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:11:00.709 [2024-04-24 21:16:14.308694] idxd_user.c: 434:user_idxd_dump_sw_err: *NOTICE*: SW Error error code: 0 00:11:00.709 [2024-04-24 21:16:14.308703] idxd_user.c: 435:user_idxd_dump_sw_err: *NOTICE*: SW Error WQ index: 0 00:11:00.709 [2024-04-24 21:16:14.308709] idxd_user.c: 436:user_idxd_dump_sw_err: *NOTICE*: SW Error Operation: 0 00:11:00.709 [2024-04-24 21:16:14.308736] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:11:00.709 [2024-04-24 21:16:14.308745] accel_dsa.c: 127:dsa_done: *ERROR*: DIF error detected. 
type=4, offset=0 00:11:00.709 [2024-04-24 21:16:14.308849] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:11:00.709 passed 00:11:00.709 Test: verify: DIF not generated, APPTAG check ...[2024-04-24 21:16:14.308910] idxd.c:1812:spdk_idxd_process_events: *ERROR*: Completion status 0x9 00:11:00.709 [2024-04-24 21:16:14.308921] idxd_user.c: 428:user_idxd_dump_sw_err: *NOTICE*: SW Error Raw:[2024-04-24 21:16:14.308931] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:11:00.709 [2024-04-24 21:16:14.308937] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:11:00.709 [2024-04-24 21:16:14.308944] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:11:00.709 [2024-04-24 21:16:14.308951] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:11:00.709 [2024-04-24 21:16:14.308959] idxd_user.c: 434:user_idxd_dump_sw_err: *NOTICE*: SW Error error code: 0 00:11:00.709 [2024-04-24 21:16:14.308965] idxd_user.c: 435:user_idxd_dump_sw_err: *NOTICE*: SW Error WQ index: 0 00:11:00.709 [2024-04-24 21:16:14.308972] idxd_user.c: 436:user_idxd_dump_sw_err: *NOTICE*: SW Error Operation: 0 00:11:00.709 [2024-04-24 21:16:14.308981] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:11:00.709 [2024-04-24 21:16:14.308991] accel_dsa.c: 127:dsa_done: *ERROR*: DIF error detected. type=2, offset=0 00:11:00.709 [2024-04-24 21:16:14.309008] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:11:00.709 passed 00:11:00.709 Test: verify: DIF not generated, REFTAG check ...[2024-04-24 21:16:14.309044] idxd.c:1812:spdk_idxd_process_events: *ERROR*: Completion status 0x9 00:11:00.709 [2024-04-24 21:16:14.309055] idxd_user.c: 428:user_idxd_dump_sw_err: *NOTICE*: SW Error Raw:[2024-04-24 21:16:14.309061] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:11:00.709 [2024-04-24 21:16:14.309068] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:11:00.709 [2024-04-24 21:16:14.309074] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:11:00.709 [2024-04-24 21:16:14.309082] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:11:00.709 [2024-04-24 21:16:14.309088] idxd_user.c: 434:user_idxd_dump_sw_err: *NOTICE*: SW Error error code: 0 00:11:00.709 [2024-04-24 21:16:14.309099] idxd_user.c: 435:user_idxd_dump_sw_err: *NOTICE*: SW Error WQ index: 0 00:11:00.709 [2024-04-24 21:16:14.309105] idxd_user.c: 436:user_idxd_dump_sw_err: *NOTICE*: SW Error Operation: 0 00:11:00.709 [2024-04-24 21:16:14.309115] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:11:00.709 [2024-04-24 21:16:14.309123] accel_dsa.c: 127:dsa_done: *ERROR*: DIF error detected. 
type=1, offset=0 00:11:00.709 [2024-04-24 21:16:14.309142] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:11:00.709 passed 00:11:00.709 Test: verify: APPTAG correct, APPTAG check ...passed 00:11:00.709 Test: verify: APPTAG incorrect, APPTAG check ...[2024-04-24 21:16:14.309210] idxd.c:1812:spdk_idxd_process_events: *ERROR*: Completion status 0x9 00:11:00.709 [2024-04-24 21:16:14.309219] idxd_user.c: 428:user_idxd_dump_sw_err: *NOTICE*: SW Error Raw:[2024-04-24 21:16:14.309226] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:11:00.709 [2024-04-24 21:16:14.309233] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:11:00.709 [2024-04-24 21:16:14.309240] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:11:00.709 [2024-04-24 21:16:14.309246] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:11:00.709 [2024-04-24 21:16:14.309253] idxd_user.c: 434:user_idxd_dump_sw_err: *NOTICE*: SW Error error code: 0 00:11:00.709 [2024-04-24 21:16:14.309259] idxd_user.c: 435:user_idxd_dump_sw_err: *NOTICE*: SW Error WQ index: 0 00:11:00.709 [2024-04-24 21:16:14.309273] idxd_user.c: 436:user_idxd_dump_sw_err: *NOTICE*: SW Error Operation: 0 00:11:00.709 [2024-04-24 21:16:14.309281] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:11:00.709 [2024-04-24 21:16:14.309289] accel_dsa.c: 127:dsa_done: *ERROR*: DIF error detected. type=2, offset=0 00:11:00.709 passed 00:11:00.709 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:11:00.709 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:11:00.709 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:11:00.709 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-04-24 21:16:14.309457] idxd.c:1812:spdk_idxd_process_events: *ERROR*: Completion status 0x9 00:11:00.709 [2024-04-24 21:16:14.309468] idxd_user.c: 428:user_idxd_dump_sw_err: *NOTICE*: SW Error Raw:[2024-04-24 21:16:14.309474] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:11:00.709 [2024-04-24 21:16:14.309482] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:11:00.709 [2024-04-24 21:16:14.309489] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:11:00.709 [2024-04-24 21:16:14.309496] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:11:00.709 [2024-04-24 21:16:14.309504] idxd_user.c: 434:user_idxd_dump_sw_err: *NOTICE*: SW Error error code: 0 00:11:00.709 [2024-04-24 21:16:14.309512] idxd_user.c: 435:user_idxd_dump_sw_err: *NOTICE*: SW Error WQ index: 0 00:11:00.709 [2024-04-24 21:16:14.309518] idxd_user.c: 436:user_idxd_dump_sw_err: *NOTICE*: SW Error Operation: 0 00:11:00.709 [2024-04-24 21:16:14.309526] idxd.c:1812:spdk_idxd_process_events: *ERROR*: Completion status 0x9 00:11:00.709 [2024-04-24 21:16:14.309532] idxd_user.c: 428:user_idxd_dump_sw_err: *NOTICE*: SW Error Raw:[2024-04-24 21:16:14.309539] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:11:00.709 [2024-04-24 21:16:14.309544] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:11:00.709 [2024-04-24 21:16:14.309551] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:11:00.709 [2024-04-24 21:16:14.309557] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:11:00.709 [2024-04-24 21:16:14.309565] idxd_user.c: 434:user_idxd_dump_sw_err: *NOTICE*: SW Error error code: 0 00:11:00.709 [2024-04-24 21:16:14.309571] idxd_user.c: 435:user_idxd_dump_sw_err: *NOTICE*: SW Error WQ index: 0 00:11:00.709 [2024-04-24 21:16:14.309580] idxd_user.c: 
436:user_idxd_dump_sw_err: *NOTICE*: SW Error Operation: 0 00:11:00.709 [2024-04-24 21:16:14.309587] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:11:00.709 [2024-04-24 21:16:14.309598] accel_dsa.c: 127:dsa_done: *ERROR*: DIF error detected. type=1, offset=0 00:11:00.709 [2024-04-24 21:16:14.309607] idxd.c:1812:spdk_idxd_process_events: *ERROR*: Completion status 0x5 00:11:00.709 [2024-04-24 21:16:14.309616] idxd_user.c: 428:user_idxd_dump_sw_err: *NOTICE*: SW Error Raw:passed 00:11:00.709 Test: generate copy: DIF generated, GUARD check ...[2024-04-24 21:16:14.309623] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:11:00.709 [2024-04-24 21:16:14.309630] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:11:00.709 [2024-04-24 21:16:14.309636] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:11:00.709 [2024-04-24 21:16:14.309643] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:11:00.709 [2024-04-24 21:16:14.309650] idxd_user.c: 434:user_idxd_dump_sw_err: *NOTICE*: SW Error error code: 0 00:11:00.709 [2024-04-24 21:16:14.309657] idxd_user.c: 435:user_idxd_dump_sw_err: *NOTICE*: SW Error WQ index: 0 00:11:00.709 [2024-04-24 21:16:14.309664] idxd_user.c: 436:user_idxd_dump_sw_err: *NOTICE*: SW Error Operation: 0 00:11:00.709 passed 00:11:00.709 Test: generate copy: DIF generated, APPTAG check ...passed 00:11:00.709 Test: generate copy: DIF generated, REFTAG check ...passed 00:11:00.709 Test: generate copy: DIF generated, no GUARD check flag set ...[2024-04-24 21:16:14.309802] idxd.c:1571:idxd_validate_dif_insert_params: *ERROR*: Guard check flag must be set. 00:11:00.709 passed 00:11:00.709 Test: generate copy: DIF generated, no APPTAG check flag set ...[2024-04-24 21:16:14.309838] idxd.c:1576:idxd_validate_dif_insert_params: *ERROR*: Application Tag check flag must be set. 00:11:00.709 passed 00:11:00.709 Test: generate copy: DIF generated, no REFTAG check flag set ...[2024-04-24 21:16:14.309879] idxd.c:1581:idxd_validate_dif_insert_params: *ERROR*: Reference Tag check flag must be set. 00:11:00.709 passed 00:11:00.709 Test: generate copy: iovecs-len validate ...[2024-04-24 21:16:14.309917] idxd.c:1608:idxd_validate_dif_insert_iovecs: *ERROR*: Invalid length of data in src (4096) and dst (4176) in iovecs[0].
00:11:00.709 passed 00:11:00.709 Test: generate copy: buffer alignment validate ...passed 00:11:00.709 00:11:00.709 Run Summary: Type Total Ran Passed Failed Inactive 00:11:00.709 suites 1 1 n/a 0 0 00:11:00.709 tests 20 20 20 0 0 00:11:00.709 asserts 204 204 204 0 n/a 00:11:00.709 00:11:00.709 Elapsed time = 0.003 seconds 00:11:02.615 00:11:02.615 real 0m12.172s 00:11:02.615 user 0m23.613s 00:11:02.615 sys 0m0.243s 00:11:02.615 21:16:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:02.615 21:16:17 -- common/autotest_common.sh@10 -- # set +x 00:11:02.615 ************************************ 00:11:02.615 END TEST accel_dif_functional_tests 00:11:02.615 ************************************ 00:11:02.615 00:11:02.615 real 3m59.248s 00:11:02.615 user 2m34.663s 00:11:02.615 sys 0m7.800s 00:11:02.615 21:16:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:02.615 21:16:17 -- common/autotest_common.sh@10 -- # set +x 00:11:02.615 ************************************ 00:11:02.615 END TEST accel 00:11:02.615 ************************************ 00:11:02.615 21:16:17 -- spdk/autotest.sh@180 -- # run_test accel_rpc /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/accel_rpc.sh 00:11:02.615 21:16:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:02.615 21:16:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:02.615 21:16:17 -- common/autotest_common.sh@10 -- # set +x 00:11:02.874 ************************************ 00:11:02.874 START TEST accel_rpc 00:11:02.874 ************************************ 00:11:02.874 21:16:17 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/accel_rpc.sh 00:11:02.874 * Looking for test storage... 00:11:02.874 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel 00:11:02.874 21:16:17 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:11:02.874 21:16:17 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=1079420 00:11:02.874 21:16:17 -- accel/accel_rpc.sh@15 -- # waitforlisten 1079420 00:11:02.874 21:16:17 -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:11:02.874 21:16:17 -- common/autotest_common.sh@817 -- # '[' -z 1079420 ']' 00:11:02.874 21:16:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:02.874 21:16:17 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:02.874 21:16:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:02.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:02.874 21:16:17 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:02.874 21:16:17 -- common/autotest_common.sh@10 -- # set +x 00:11:02.874 [2024-04-24 21:16:17.818479] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization... 
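The accel_rpc suite spinning up here exercises the JSON-RPC surface instead of data movement: spdk_tgt starts with --wait-for-rpc, each scan RPC is issued once (succeeding) and then again (failing with code -114, 'Operation already in progress', as the request/response pairs below show), opcode assignment is checked, and only then does framework_start_init run. The same flow by hand with scripts/rpc.py, every method name taken from this log:

    "$SPDK/build/bin/spdk_tgt" --wait-for-rpc &
    "$SPDK/scripts/rpc.py" dsa_scan_accel_module       # first call succeeds
    "$SPDK/scripts/rpc.py" dsa_scan_accel_module       # repeat: expect error -114
    "$SPDK/scripts/rpc.py" iaa_scan_accel_module
    "$SPDK/scripts/rpc.py" accel_assign_opc -o copy -m software
    "$SPDK/scripts/rpc.py" framework_start_init
    "$SPDK/scripts/rpc.py" accel_get_opc_assignments | jq -r .copy   # expect: software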
00:11:02.874 [2024-04-24 21:16:17.818594] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1079420 ] 00:11:03.135 EAL: No free 2048 kB hugepages reported on node 1 00:11:03.135 [2024-04-24 21:16:17.936970] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:03.135 [2024-04-24 21:16:18.033496] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:03.702 21:16:18 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:03.702 21:16:18 -- common/autotest_common.sh@850 -- # return 0 00:11:03.703 21:16:18 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:11:03.703 21:16:18 -- accel/accel_rpc.sh@45 -- # [[ 1 -gt 0 ]] 00:11:03.703 21:16:18 -- accel/accel_rpc.sh@46 -- # run_test accel_scan_dsa_modules accel_scan_dsa_modules_test_suite 00:11:03.703 21:16:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:03.703 21:16:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:03.703 21:16:18 -- common/autotest_common.sh@10 -- # set +x 00:11:03.703 ************************************ 00:11:03.703 START TEST accel_scan_dsa_modules 00:11:03.703 ************************************ 00:11:03.703 21:16:18 -- common/autotest_common.sh@1111 -- # accel_scan_dsa_modules_test_suite 00:11:03.703 21:16:18 -- accel/accel_rpc.sh@21 -- # rpc_cmd dsa_scan_accel_module 00:11:03.703 21:16:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:03.703 21:16:18 -- common/autotest_common.sh@10 -- # set +x 00:11:03.703 [2024-04-24 21:16:18.658015] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:11:03.703 21:16:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:03.703 21:16:18 -- accel/accel_rpc.sh@22 -- # NOT rpc_cmd dsa_scan_accel_module 00:11:03.703 21:16:18 -- common/autotest_common.sh@638 -- # local es=0 00:11:03.703 21:16:18 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd dsa_scan_accel_module 00:11:03.703 21:16:18 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:11:03.703 21:16:18 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:03.703 21:16:18 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:11:03.703 21:16:18 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:03.703 21:16:18 -- common/autotest_common.sh@641 -- # rpc_cmd dsa_scan_accel_module 00:11:03.703 21:16:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:03.703 21:16:18 -- common/autotest_common.sh@10 -- # set +x 00:11:03.962 request: 00:11:03.962 { 00:11:03.962 "method": "dsa_scan_accel_module", 00:11:03.962 "req_id": 1 00:11:03.962 } 00:11:03.962 Got JSON-RPC error response 00:11:03.962 response: 00:11:03.962 { 00:11:03.962 "code": -114, 00:11:03.962 "message": "Operation already in progress" 00:11:03.962 } 00:11:03.962 21:16:18 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:11:03.962 21:16:18 -- common/autotest_common.sh@641 -- # es=1 00:11:03.962 21:16:18 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:11:03.962 21:16:18 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:11:03.962 21:16:18 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:11:03.962 00:11:03.962 real 0m0.021s 00:11:03.962 user 0m0.004s 00:11:03.962 sys 0m0.003s 00:11:03.962 21:16:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:03.962 21:16:18 -- common/autotest_common.sh@10 -- # set +x 00:11:03.962 
************************************ 00:11:03.962 END TEST accel_scan_dsa_modules 00:11:03.962 ************************************ 00:11:03.962 21:16:18 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:11:03.962 21:16:18 -- accel/accel_rpc.sh@49 -- # [[ 1 -gt 0 ]] 00:11:03.962 21:16:18 -- accel/accel_rpc.sh@50 -- # run_test accel_scan_iaa_modules accel_scan_iaa_modules_test_suite 00:11:03.962 21:16:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:03.962 21:16:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:03.962 21:16:18 -- common/autotest_common.sh@10 -- # set +x 00:11:03.962 ************************************ 00:11:03.962 START TEST accel_scan_iaa_modules 00:11:03.962 ************************************ 00:11:03.962 21:16:18 -- common/autotest_common.sh@1111 -- # accel_scan_iaa_modules_test_suite 00:11:03.962 21:16:18 -- accel/accel_rpc.sh@29 -- # rpc_cmd iaa_scan_accel_module 00:11:03.962 21:16:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:03.962 21:16:18 -- common/autotest_common.sh@10 -- # set +x 00:11:03.962 [2024-04-24 21:16:18.814041] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:11:03.962 21:16:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:03.962 21:16:18 -- accel/accel_rpc.sh@30 -- # NOT rpc_cmd iaa_scan_accel_module 00:11:03.962 21:16:18 -- common/autotest_common.sh@638 -- # local es=0 00:11:03.962 21:16:18 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd iaa_scan_accel_module 00:11:03.962 21:16:18 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:11:03.962 21:16:18 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:03.962 21:16:18 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:11:03.962 21:16:18 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:03.962 21:16:18 -- common/autotest_common.sh@641 -- # rpc_cmd iaa_scan_accel_module 00:11:03.962 21:16:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:03.962 21:16:18 -- common/autotest_common.sh@10 -- # set +x 00:11:03.962 request: 00:11:03.962 { 00:11:03.962 "method": "iaa_scan_accel_module", 00:11:03.962 "req_id": 1 00:11:03.962 } 00:11:03.962 Got JSON-RPC error response 00:11:03.962 response: 00:11:03.962 { 00:11:03.962 "code": -114, 00:11:03.962 "message": "Operation already in progress" 00:11:03.962 } 00:11:03.962 21:16:18 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:11:03.962 21:16:18 -- common/autotest_common.sh@641 -- # es=1 00:11:03.962 21:16:18 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:11:03.962 21:16:18 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:11:03.962 21:16:18 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:11:03.962 00:11:03.962 real 0m0.024s 00:11:03.962 user 0m0.004s 00:11:03.962 sys 0m0.002s 00:11:03.962 21:16:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:03.962 21:16:18 -- common/autotest_common.sh@10 -- # set +x 00:11:03.962 ************************************ 00:11:03.962 END TEST accel_scan_iaa_modules 00:11:03.962 ************************************ 00:11:03.962 21:16:18 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:11:03.962 21:16:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:03.962 21:16:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:03.962 21:16:18 -- common/autotest_common.sh@10 -- # set +x 00:11:04.221 ************************************ 00:11:04.221 START TEST accel_assign_opcode 
00:11:04.222 ************************************ 00:11:04.222 21:16:18 -- common/autotest_common.sh@1111 -- # accel_assign_opcode_test_suite 00:11:04.222 21:16:18 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:11:04.222 21:16:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:04.222 21:16:18 -- common/autotest_common.sh@10 -- # set +x 00:11:04.222 [2024-04-24 21:16:18.970086] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:11:04.222 21:16:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:04.222 21:16:18 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:11:04.222 21:16:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:04.222 21:16:18 -- common/autotest_common.sh@10 -- # set +x 00:11:04.222 [2024-04-24 21:16:18.982066] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:11:04.222 21:16:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:04.222 21:16:18 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:11:04.222 21:16:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:04.222 21:16:18 -- common/autotest_common.sh@10 -- # set +x 00:11:14.272 21:16:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:14.272 21:16:27 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:11:14.272 21:16:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:14.272 21:16:27 -- common/autotest_common.sh@10 -- # set +x 00:11:14.272 21:16:27 -- accel/accel_rpc.sh@42 -- # grep software 00:11:14.272 21:16:27 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:11:14.272 21:16:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:14.272 software 00:11:14.272 00:11:14.272 real 0m8.937s 00:11:14.272 user 0m0.036s 00:11:14.272 sys 0m0.006s 00:11:14.272 21:16:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:14.272 21:16:27 -- common/autotest_common.sh@10 -- # set +x 00:11:14.272 ************************************ 00:11:14.272 END TEST accel_assign_opcode 00:11:14.272 ************************************ 00:11:14.272 21:16:27 -- accel/accel_rpc.sh@55 -- # killprocess 1079420 00:11:14.272 21:16:27 -- common/autotest_common.sh@936 -- # '[' -z 1079420 ']' 00:11:14.272 21:16:27 -- common/autotest_common.sh@940 -- # kill -0 1079420 00:11:14.272 21:16:27 -- common/autotest_common.sh@941 -- # uname 00:11:14.272 21:16:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:14.272 21:16:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1079420 00:11:14.272 21:16:27 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:14.272 21:16:27 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:14.272 21:16:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1079420' 00:11:14.272 killing process with pid 1079420 00:11:14.272 21:16:27 -- common/autotest_common.sh@955 -- # kill 1079420 00:11:14.272 21:16:27 -- common/autotest_common.sh@960 -- # wait 1079420 00:11:16.819 00:11:16.819 real 0m13.958s 00:11:16.819 user 0m4.457s 00:11:16.819 sys 0m0.807s 00:11:16.819 21:16:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:16.819 21:16:31 -- common/autotest_common.sh@10 -- # set +x 00:11:16.819 ************************************ 00:11:16.819 END TEST accel_rpc 00:11:16.819 ************************************ 00:11:16.819 21:16:31 -- spdk/autotest.sh@181 -- # run_test app_cmdline 
/var/jenkins/workspace/dsa-phy-autotest/spdk/test/app/cmdline.sh 00:11:16.819 21:16:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:16.819 21:16:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:16.819 21:16:31 -- common/autotest_common.sh@10 -- # set +x 00:11:16.819 ************************************ 00:11:16.819 START TEST app_cmdline 00:11:16.819 ************************************ 00:11:16.819 21:16:31 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/app/cmdline.sh 00:11:17.078 * Looking for test storage... 00:11:17.078 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/app 00:11:17.078 21:16:31 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:11:17.078 21:16:31 -- app/cmdline.sh@17 -- # spdk_tgt_pid=1082334 00:11:17.078 21:16:31 -- app/cmdline.sh@16 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:11:17.078 21:16:31 -- app/cmdline.sh@18 -- # waitforlisten 1082334 00:11:17.078 21:16:31 -- common/autotest_common.sh@817 -- # '[' -z 1082334 ']' 00:11:17.079 21:16:31 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:17.079 21:16:31 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:17.079 21:16:31 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:17.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:17.079 21:16:31 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:17.079 21:16:31 -- common/autotest_common.sh@10 -- # set +x 00:11:17.079 [2024-04-24 21:16:31.874097] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization... 
00:11:17.079 [2024-04-24 21:16:31.874217] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1082334 ] 00:11:17.079 EAL: No free 2048 kB hugepages reported on node 1 00:11:17.079 [2024-04-24 21:16:31.979113] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:17.340 [2024-04-24 21:16:32.074798] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:17.602 21:16:32 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:17.602 21:16:32 -- common/autotest_common.sh@850 -- # return 0 00:11:17.602 21:16:32 -- app/cmdline.sh@20 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:11:17.861 { 00:11:17.861 "version": "SPDK v24.05-pre git sha1 ea150257d", 00:11:17.861 "fields": { 00:11:17.861 "major": 24, 00:11:17.861 "minor": 5, 00:11:17.861 "patch": 0, 00:11:17.861 "suffix": "-pre", 00:11:17.861 "commit": "ea150257d" 00:11:17.861 } 00:11:17.861 } 00:11:17.861 21:16:32 -- app/cmdline.sh@22 -- # expected_methods=() 00:11:17.861 21:16:32 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:11:17.861 21:16:32 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:11:17.861 21:16:32 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:11:17.861 21:16:32 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:11:17.861 21:16:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:17.861 21:16:32 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:11:17.861 21:16:32 -- common/autotest_common.sh@10 -- # set +x 00:11:17.861 21:16:32 -- app/cmdline.sh@26 -- # sort 00:11:17.861 21:16:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:17.861 21:16:32 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:11:17.861 21:16:32 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:11:17.861 21:16:32 -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:11:17.861 21:16:32 -- common/autotest_common.sh@638 -- # local es=0 00:11:17.861 21:16:32 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:11:17.861 21:16:32 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:11:17.861 21:16:32 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:17.861 21:16:32 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:11:17.861 21:16:32 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:17.861 21:16:32 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:11:17.861 21:16:32 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:17.861 21:16:32 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:11:17.861 21:16:32 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py ]] 00:11:17.861 21:16:32 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:11:18.119 request: 00:11:18.119 { 00:11:18.119 "method": "env_dpdk_get_mem_stats", 
00:11:18.119 "req_id": 1 00:11:18.119 } 00:11:18.119 Got JSON-RPC error response 00:11:18.119 response: 00:11:18.119 { 00:11:18.119 "code": -32601, 00:11:18.119 "message": "Method not found" 00:11:18.119 } 00:11:18.119 21:16:32 -- common/autotest_common.sh@641 -- # es=1 00:11:18.119 21:16:32 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:11:18.119 21:16:32 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:11:18.119 21:16:32 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:11:18.119 21:16:32 -- app/cmdline.sh@1 -- # killprocess 1082334 00:11:18.119 21:16:32 -- common/autotest_common.sh@936 -- # '[' -z 1082334 ']' 00:11:18.119 21:16:32 -- common/autotest_common.sh@940 -- # kill -0 1082334 00:11:18.119 21:16:32 -- common/autotest_common.sh@941 -- # uname 00:11:18.119 21:16:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:18.119 21:16:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1082334 00:11:18.119 21:16:32 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:18.119 21:16:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:18.119 21:16:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1082334' 00:11:18.119 killing process with pid 1082334 00:11:18.119 21:16:32 -- common/autotest_common.sh@955 -- # kill 1082334 00:11:18.119 21:16:32 -- common/autotest_common.sh@960 -- # wait 1082334 00:11:19.059 00:11:19.059 real 0m2.078s 00:11:19.059 user 0m2.213s 00:11:19.059 sys 0m0.472s 00:11:19.059 21:16:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:19.059 21:16:33 -- common/autotest_common.sh@10 -- # set +x 00:11:19.059 ************************************ 00:11:19.059 END TEST app_cmdline 00:11:19.059 ************************************ 00:11:19.059 21:16:33 -- spdk/autotest.sh@182 -- # run_test version /var/jenkins/workspace/dsa-phy-autotest/spdk/test/app/version.sh 00:11:19.059 21:16:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:19.059 21:16:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:19.059 21:16:33 -- common/autotest_common.sh@10 -- # set +x 00:11:19.059 ************************************ 00:11:19.059 START TEST version 00:11:19.059 ************************************ 00:11:19.059 21:16:33 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/app/version.sh 00:11:19.059 * Looking for test storage... 
00:11:19.059 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/app 00:11:19.318 21:16:34 -- app/version.sh@17 -- # get_header_version major 00:11:19.318 21:16:34 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/dsa-phy-autotest/spdk/include/spdk/version.h 00:11:19.318 21:16:34 -- app/version.sh@14 -- # cut -f2 00:11:19.318 21:16:34 -- app/version.sh@14 -- # tr -d '"' 00:11:19.318 21:16:34 -- app/version.sh@17 -- # major=24 00:11:19.318 21:16:34 -- app/version.sh@18 -- # get_header_version minor 00:11:19.318 21:16:34 -- app/version.sh@14 -- # cut -f2 00:11:19.318 21:16:34 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/dsa-phy-autotest/spdk/include/spdk/version.h 00:11:19.318 21:16:34 -- app/version.sh@14 -- # tr -d '"' 00:11:19.318 21:16:34 -- app/version.sh@18 -- # minor=5 00:11:19.318 21:16:34 -- app/version.sh@19 -- # get_header_version patch 00:11:19.318 21:16:34 -- app/version.sh@14 -- # cut -f2 00:11:19.318 21:16:34 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/dsa-phy-autotest/spdk/include/spdk/version.h 00:11:19.318 21:16:34 -- app/version.sh@14 -- # tr -d '"' 00:11:19.318 21:16:34 -- app/version.sh@19 -- # patch=0 00:11:19.318 21:16:34 -- app/version.sh@20 -- # get_header_version suffix 00:11:19.318 21:16:34 -- app/version.sh@14 -- # cut -f2 00:11:19.318 21:16:34 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/dsa-phy-autotest/spdk/include/spdk/version.h 00:11:19.318 21:16:34 -- app/version.sh@14 -- # tr -d '"' 00:11:19.318 21:16:34 -- app/version.sh@20 -- # suffix=-pre 00:11:19.318 21:16:34 -- app/version.sh@22 -- # version=24.5 00:11:19.318 21:16:34 -- app/version.sh@25 -- # (( patch != 0 )) 00:11:19.318 21:16:34 -- app/version.sh@28 -- # version=24.5rc0 00:11:19.318 21:16:34 -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/dsa-phy-autotest/spdk/python:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/dsa-phy-autotest/spdk/python:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/dsa-phy-autotest/spdk/python 00:11:19.318 21:16:34 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:11:19.318 21:16:34 -- app/version.sh@30 -- # py_version=24.5rc0 00:11:19.318 21:16:34 -- app/version.sh@31 -- # [[ 24.5rc0 == \2\4\.\5\r\c\0 ]] 00:11:19.318 00:11:19.318 real 0m0.145s 00:11:19.318 user 0m0.073s 00:11:19.318 sys 0m0.100s 00:11:19.318 21:16:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:19.318 21:16:34 -- common/autotest_common.sh@10 -- # set +x 00:11:19.318 ************************************ 00:11:19.318 END TEST version 00:11:19.318 ************************************ 00:11:19.318 21:16:34 -- spdk/autotest.sh@184 -- # '[' 0 -eq 1 ']' 00:11:19.318 21:16:34 -- spdk/autotest.sh@194 -- # uname -s 00:11:19.318 21:16:34 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:11:19.318 21:16:34 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:11:19.318 21:16:34 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:11:19.318 21:16:34 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:11:19.318 21:16:34 -- spdk/autotest.sh@254 -- # '[' 0 -eq 1 ']' 00:11:19.318 21:16:34 -- spdk/autotest.sh@258 -- # timing_exit lib 00:11:19.318 21:16:34 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:19.318 21:16:34 -- common/autotest_common.sh@10 -- # set +x 
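The version test that just passed cross-checks three sources: the SPDK_VERSION_* macros in include/spdk/version.h, the suffix handling in version.sh, and the Python package's spdk.__version__. A minimal standalone sketch of the header-parsing helper, reconstructed from the grep/cut/tr pipeline logged above — it assumes the header uses tab-separated "#define NAME<TAB>value" lines (which the logged `cut -f2` implies); the function name get_header_version matches the script, while the paths below are illustrative:

    #!/usr/bin/env bash
    # Pull one SPDK_VERSION_* field out of version.h, as version.sh does.
    get_header_version() {
        local field=$1 header=$2
        grep -E "^#define SPDK_VERSION_${field}[[:space:]]+" "$header" \
            | cut -f2 | tr -d '"'
    }

    header=./include/spdk/version.h   # illustrative path
    major=$(get_header_version MAJOR "$header")
    minor=$(get_header_version MINOR "$header")
    patch=$(get_header_version PATCH "$header")
    suffix=$(get_header_version SUFFIX "$header")
    version="${major}.${minor}"
    (( patch != 0 )) && version="${version}.${patch}"
    [[ $suffix == -pre ]] && version="${version}rc0"
    echo "$version"                   # e.g. 24.5rc0, as seen in the log

The test then asserts this string equals `python3 -c 'import spdk; print(spdk.__version__)'`, which is why the log shows PYTHONPATH being extended with the in-tree python/ directory before the comparison.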
00:11:19.318 21:16:34 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:11:19.318 21:16:34 -- spdk/autotest.sh@268 -- # '[' 0 -eq 1 ']' 00:11:19.318 21:16:34 -- spdk/autotest.sh@277 -- # '[' 1 -eq 1 ']' 00:11:19.318 21:16:34 -- spdk/autotest.sh@278 -- # export NET_TYPE 00:11:19.318 21:16:34 -- spdk/autotest.sh@281 -- # '[' tcp = rdma ']' 00:11:19.318 21:16:34 -- spdk/autotest.sh@284 -- # '[' tcp = tcp ']' 00:11:19.318 21:16:34 -- spdk/autotest.sh@285 -- # run_test nvmf_tcp /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:11:19.318 21:16:34 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:19.318 21:16:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:19.318 21:16:34 -- common/autotest_common.sh@10 -- # set +x 00:11:19.318 ************************************ 00:11:19.318 START TEST nvmf_tcp 00:11:19.318 ************************************ 00:11:19.318 21:16:34 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:11:19.576 * Looking for test storage... 00:11:19.576 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf 00:11:19.576 21:16:34 -- nvmf/nvmf.sh@10 -- # uname -s 00:11:19.576 21:16:34 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:11:19.576 21:16:34 -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:11:19.576 21:16:34 -- nvmf/common.sh@7 -- # uname -s 00:11:19.576 21:16:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:19.576 21:16:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:19.576 21:16:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:19.576 21:16:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:19.576 21:16:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:19.576 21:16:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:19.576 21:16:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:19.576 21:16:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:19.576 21:16:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:19.576 21:16:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:19.576 21:16:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b7babf-2e5c-ee11-906e-a4bf01970bf2 00:11:19.576 21:16:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=80b7babf-2e5c-ee11-906e-a4bf01970bf2 00:11:19.576 21:16:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:19.576 21:16:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:19.576 21:16:34 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:11:19.576 21:16:34 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:19.576 21:16:34 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:11:19.576 21:16:34 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:19.576 21:16:34 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:19.576 21:16:34 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:19.576 21:16:34 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.576 21:16:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.576 21:16:34 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.576 21:16:34 -- paths/export.sh@5 -- # export PATH 00:11:19.576 21:16:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.576 21:16:34 -- nvmf/common.sh@47 -- # : 0 00:11:19.576 21:16:34 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:19.576 21:16:34 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:19.576 21:16:34 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:19.576 21:16:34 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:19.576 21:16:34 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:19.576 21:16:34 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:19.576 21:16:34 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:19.576 21:16:34 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:19.576 21:16:34 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:11:19.576 21:16:34 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:11:19.576 21:16:34 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:11:19.576 21:16:34 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:19.576 21:16:34 -- common/autotest_common.sh@10 -- # set +x 00:11:19.576 21:16:34 -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:11:19.576 21:16:34 -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:19.576 21:16:34 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:19.576 21:16:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:19.576 21:16:34 -- common/autotest_common.sh@10 -- # set +x 00:11:19.576 ************************************ 00:11:19.576 START TEST nvmf_example 00:11:19.576 ************************************ 00:11:19.576 21:16:34 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:19.576 * Looking for test storage... 
00:11:19.576 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:11:19.576 21:16:34 -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:11:19.576 21:16:34 -- nvmf/common.sh@7 -- # uname -s 00:11:19.577 21:16:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:19.577 21:16:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:19.577 21:16:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:19.577 21:16:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:19.577 21:16:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:19.577 21:16:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:19.577 21:16:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:19.577 21:16:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:19.577 21:16:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:19.577 21:16:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:19.577 21:16:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b7babf-2e5c-ee11-906e-a4bf01970bf2 00:11:19.577 21:16:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=80b7babf-2e5c-ee11-906e-a4bf01970bf2 00:11:19.577 21:16:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:19.577 21:16:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:19.577 21:16:34 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:11:19.577 21:16:34 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:19.577 21:16:34 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:11:19.577 21:16:34 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:19.577 21:16:34 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:19.577 21:16:34 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:19.577 21:16:34 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.577 21:16:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.577 21:16:34 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.577 21:16:34 -- paths/export.sh@5 -- # export PATH 00:11:19.577 21:16:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.577 21:16:34 -- nvmf/common.sh@47 -- # : 0 00:11:19.577 21:16:34 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:19.577 21:16:34 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:19.577 21:16:34 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:19.577 21:16:34 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:19.577 21:16:34 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:19.577 21:16:34 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:19.577 21:16:34 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:19.577 21:16:34 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:19.577 21:16:34 -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:11:19.577 21:16:34 -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:11:19.577 21:16:34 -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:11:19.577 21:16:34 -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:11:19.577 21:16:34 -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:11:19.577 21:16:34 -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:11:19.577 21:16:34 -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:11:19.577 21:16:34 -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:11:19.577 21:16:34 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:19.577 21:16:34 -- common/autotest_common.sh@10 -- # set +x 00:11:19.577 21:16:34 -- target/nvmf_example.sh@41 -- # nvmftestinit 00:11:19.577 21:16:34 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:11:19.577 21:16:34 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:19.577 21:16:34 -- nvmf/common.sh@437 -- # prepare_net_devs 00:11:19.577 21:16:34 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:11:19.577 21:16:34 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:11:19.577 21:16:34 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:19.577 21:16:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:19.577 21:16:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:19.577 21:16:34 -- nvmf/common.sh@403 -- # [[ phy-fallback != virt ]] 00:11:19.577 21:16:34 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:11:19.577 21:16:34 -- nvmf/common.sh@285 -- # xtrace_disable 00:11:19.577 21:16:34 -- 
common/autotest_common.sh@10 -- # set +x 00:11:24.859 21:16:39 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:11:24.859 21:16:39 -- nvmf/common.sh@291 -- # pci_devs=() 00:11:24.859 21:16:39 -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:24.859 21:16:39 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:24.859 21:16:39 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:24.859 21:16:39 -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:24.859 21:16:39 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:24.859 21:16:39 -- nvmf/common.sh@295 -- # net_devs=() 00:11:24.859 21:16:39 -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:24.859 21:16:39 -- nvmf/common.sh@296 -- # e810=() 00:11:24.859 21:16:39 -- nvmf/common.sh@296 -- # local -ga e810 00:11:24.859 21:16:39 -- nvmf/common.sh@297 -- # x722=() 00:11:24.859 21:16:39 -- nvmf/common.sh@297 -- # local -ga x722 00:11:24.859 21:16:39 -- nvmf/common.sh@298 -- # mlx=() 00:11:24.859 21:16:39 -- nvmf/common.sh@298 -- # local -ga mlx 00:11:24.859 21:16:39 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:24.859 21:16:39 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:24.859 21:16:39 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:24.859 21:16:39 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:24.859 21:16:39 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:24.859 21:16:39 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:24.859 21:16:39 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:24.859 21:16:39 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:24.859 21:16:39 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:24.859 21:16:39 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:24.859 21:16:39 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:24.859 21:16:39 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:24.859 21:16:39 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:24.859 21:16:39 -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:11:24.859 21:16:39 -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:11:24.859 21:16:39 -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:11:24.859 21:16:39 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:24.860 21:16:39 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:24.860 21:16:39 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:11:24.860 Found 0000:27:00.0 (0x8086 - 0x159b) 00:11:24.860 21:16:39 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:24.860 21:16:39 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:24.860 21:16:39 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:24.860 21:16:39 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:24.860 21:16:39 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:24.860 21:16:39 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:24.860 21:16:39 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:11:24.860 Found 0000:27:00.1 (0x8086 - 0x159b) 00:11:24.860 21:16:39 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:24.860 21:16:39 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:24.860 21:16:39 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:24.860 21:16:39 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:24.860 
21:16:39 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:24.860 21:16:39 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:24.860 21:16:39 -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:11:24.860 21:16:39 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:24.860 21:16:39 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:24.860 21:16:39 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:24.860 21:16:39 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:24.860 21:16:39 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:11:24.860 Found net devices under 0000:27:00.0: cvl_0_0 00:11:24.860 21:16:39 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:24.860 21:16:39 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:24.860 21:16:39 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:24.860 21:16:39 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:24.860 21:16:39 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:24.860 21:16:39 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:11:24.860 Found net devices under 0000:27:00.1: cvl_0_1 00:11:24.860 21:16:39 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:24.860 21:16:39 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:11:24.860 21:16:39 -- nvmf/common.sh@403 -- # is_hw=yes 00:11:24.860 21:16:39 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:11:24.860 21:16:39 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:11:24.860 21:16:39 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:11:24.860 21:16:39 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:24.860 21:16:39 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:24.860 21:16:39 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:24.860 21:16:39 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:24.860 21:16:39 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:24.860 21:16:39 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:24.860 21:16:39 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:24.860 21:16:39 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:24.860 21:16:39 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:24.860 21:16:39 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:25.121 21:16:39 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:25.121 21:16:39 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:25.121 21:16:39 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:25.121 21:16:39 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:25.121 21:16:39 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:25.121 21:16:39 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:25.121 21:16:39 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:25.121 21:16:40 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:25.121 21:16:40 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:25.121 21:16:40 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:25.121 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:25.121 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.207 ms 00:11:25.121 00:11:25.121 --- 10.0.0.2 ping statistics --- 00:11:25.121 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:25.121 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:11:25.121 21:16:40 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:25.121 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:25.121 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:11:25.121 00:11:25.121 --- 10.0.0.1 ping statistics --- 00:11:25.121 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:25.121 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:11:25.121 21:16:40 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:25.121 21:16:40 -- nvmf/common.sh@411 -- # return 0 00:11:25.121 21:16:40 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:11:25.121 21:16:40 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:25.121 21:16:40 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:11:25.121 21:16:40 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:11:25.121 21:16:40 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:25.121 21:16:40 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:11:25.121 21:16:40 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:11:25.121 21:16:40 -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:11:25.121 21:16:40 -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:11:25.121 21:16:40 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:25.121 21:16:40 -- common/autotest_common.sh@10 -- # set +x 00:11:25.121 21:16:40 -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:11:25.121 21:16:40 -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:11:25.121 21:16:40 -- target/nvmf_example.sh@34 -- # nvmfpid=1086353 00:11:25.121 21:16:40 -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:25.380 21:16:40 -- target/nvmf_example.sh@36 -- # waitforlisten 1086353 00:11:25.380 21:16:40 -- common/autotest_common.sh@817 -- # '[' -z 1086353 ']' 00:11:25.380 21:16:40 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:25.380 21:16:40 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:25.380 21:16:40 -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:11:25.380 21:16:40 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:25.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
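Before the nvmf example target is started, common.sh stitches the two ice ports (cvl_0_0/cvl_0_1) into a point-to-point NVMe/TCP test rig: the target-side port is moved into a private network namespace, both sides get addresses on 10.0.0.0/24, and reachability is verified with the pings above. A condensed, hedged sketch of that sequence, using only commands visible in the log (interface names, addresses, namespace name, and port 4420 come from the log; run as root, and treat it as illustrative rather than the canonical common.sh):

    #!/usr/bin/env bash
    set -e
    NS=cvl_0_0_ns_spdk                # target NIC lives in this namespace
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"
    ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator side
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    # Let NVMe/TCP traffic (port 4420) in through the initiator interface.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                # initiator -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1            # target -> initiator

This is why every target-side command that follows in the log (the nvmf example app with -i 0 -g 10000 -m 0xF, and its rpc calls) is wrapped in `ip netns exec cvl_0_0_ns_spdk`, while spdk_nvme_perf connects from the root namespace to traddr:10.0.0.2 trsvcid:4420.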
00:11:25.380 21:16:40 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:25.380 21:16:40 -- common/autotest_common.sh@10 -- # set +x 00:11:25.380 EAL: No free 2048 kB hugepages reported on node 1 00:11:25.946 21:16:40 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:25.946 21:16:40 -- common/autotest_common.sh@850 -- # return 0 00:11:25.946 21:16:40 -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:11:25.946 21:16:40 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:25.946 21:16:40 -- common/autotest_common.sh@10 -- # set +x 00:11:26.206 21:16:40 -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:26.206 21:16:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:26.206 21:16:40 -- common/autotest_common.sh@10 -- # set +x 00:11:26.206 21:16:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:26.206 21:16:40 -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:11:26.206 21:16:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:26.206 21:16:40 -- common/autotest_common.sh@10 -- # set +x 00:11:26.206 21:16:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:26.206 21:16:40 -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:11:26.206 21:16:40 -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:26.206 21:16:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:26.206 21:16:40 -- common/autotest_common.sh@10 -- # set +x 00:11:26.206 21:16:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:26.206 21:16:40 -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:11:26.206 21:16:40 -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:26.206 21:16:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:26.206 21:16:40 -- common/autotest_common.sh@10 -- # set +x 00:11:26.206 21:16:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:26.206 21:16:40 -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:26.206 21:16:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:26.206 21:16:40 -- common/autotest_common.sh@10 -- # set +x 00:11:26.206 21:16:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:26.206 21:16:41 -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:11:26.206 21:16:41 -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:26.206 EAL: No free 2048 kB hugepages reported on node 1 00:11:38.421 Initializing NVMe Controllers 00:11:38.421 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:38.421 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:38.421 Initialization complete. Launching workers. 
00:11:38.421 ======================================================== 00:11:38.421 Latency(us) 00:11:38.421 Device Information : IOPS MiB/s Average min max 00:11:38.421 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18273.93 71.38 3501.97 709.09 16232.65 00:11:38.421 ======================================================== 00:11:38.421 Total : 18273.93 71.38 3501.97 709.09 16232.65 00:11:38.421 00:11:38.421 21:16:51 -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:11:38.421 21:16:51 -- target/nvmf_example.sh@66 -- # nvmftestfini 00:11:38.421 21:16:51 -- nvmf/common.sh@477 -- # nvmfcleanup 00:11:38.421 21:16:51 -- nvmf/common.sh@117 -- # sync 00:11:38.421 21:16:51 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:38.421 21:16:51 -- nvmf/common.sh@120 -- # set +e 00:11:38.421 21:16:51 -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:38.421 21:16:51 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:38.421 rmmod nvme_tcp 00:11:38.421 rmmod nvme_fabrics 00:11:38.421 rmmod nvme_keyring 00:11:38.421 21:16:51 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:38.421 21:16:51 -- nvmf/common.sh@124 -- # set -e 00:11:38.421 21:16:51 -- nvmf/common.sh@125 -- # return 0 00:11:38.421 21:16:51 -- nvmf/common.sh@478 -- # '[' -n 1086353 ']' 00:11:38.421 21:16:51 -- nvmf/common.sh@479 -- # killprocess 1086353 00:11:38.421 21:16:51 -- common/autotest_common.sh@936 -- # '[' -z 1086353 ']' 00:11:38.421 21:16:51 -- common/autotest_common.sh@940 -- # kill -0 1086353 00:11:38.421 21:16:51 -- common/autotest_common.sh@941 -- # uname 00:11:38.421 21:16:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:38.421 21:16:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1086353 00:11:38.421 21:16:51 -- common/autotest_common.sh@942 -- # process_name=nvmf 00:11:38.421 21:16:51 -- common/autotest_common.sh@946 -- # '[' nvmf = sudo ']' 00:11:38.421 21:16:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1086353' 00:11:38.421 killing process with pid 1086353 00:11:38.421 21:16:51 -- common/autotest_common.sh@955 -- # kill 1086353 00:11:38.421 21:16:51 -- common/autotest_common.sh@960 -- # wait 1086353 00:11:38.421 nvmf threads initialize successfully 00:11:38.421 bdev subsystem init successfully 00:11:38.421 created a nvmf target service 00:11:38.421 create targets's poll groups done 00:11:38.421 all subsystems of target started 00:11:38.421 nvmf target is running 00:11:38.421 all subsystems of target stopped 00:11:38.421 destroy targets's poll groups done 00:11:38.421 destroyed the nvmf target service 00:11:38.421 bdev subsystem finish successfully 00:11:38.421 nvmf threads destroy successfully 00:11:38.421 21:16:51 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:11:38.421 21:16:51 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:11:38.421 21:16:51 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:11:38.421 21:16:51 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:38.421 21:16:51 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:38.421 21:16:51 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:38.421 21:16:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:38.421 21:16:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:38.991 21:16:53 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:38.991 21:16:53 -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:11:38.991 21:16:53 -- 
common/autotest_common.sh@716 -- # xtrace_disable 00:11:38.991 21:16:53 -- common/autotest_common.sh@10 -- # set +x 00:11:38.991 00:11:38.991 real 0m19.523s 00:11:38.991 user 0m44.438s 00:11:38.991 sys 0m6.214s 00:11:38.991 21:16:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:38.991 21:16:53 -- common/autotest_common.sh@10 -- # set +x 00:11:38.991 ************************************ 00:11:38.991 END TEST nvmf_example 00:11:38.991 ************************************ 00:11:39.251 21:16:53 -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:39.251 21:16:53 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:39.251 21:16:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:39.251 21:16:53 -- common/autotest_common.sh@10 -- # set +x 00:11:39.251 ************************************ 00:11:39.251 START TEST nvmf_filesystem 00:11:39.251 ************************************ 00:11:39.251 21:16:54 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:39.251 * Looking for test storage... 00:11:39.251 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:11:39.251 21:16:54 -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common/autotest_common.sh 00:11:39.251 21:16:54 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:11:39.251 21:16:54 -- common/autotest_common.sh@34 -- # set -e 00:11:39.251 21:16:54 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:11:39.251 21:16:54 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:11:39.251 21:16:54 -- common/autotest_common.sh@38 -- # '[' -z /var/jenkins/workspace/dsa-phy-autotest/spdk/../output ']' 00:11:39.251 21:16:54 -- common/autotest_common.sh@43 -- # [[ -e /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common/build_config.sh ]] 00:11:39.251 21:16:54 -- common/autotest_common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common/build_config.sh 00:11:39.251 21:16:54 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:11:39.251 21:16:54 -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:11:39.251 21:16:54 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:11:39.251 21:16:54 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:11:39.251 21:16:54 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:11:39.251 21:16:54 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:11:39.251 21:16:54 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:11:39.251 21:16:54 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:11:39.251 21:16:54 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:11:39.251 21:16:54 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:11:39.251 21:16:54 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:11:39.251 21:16:54 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:11:39.251 21:16:54 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:11:39.251 21:16:54 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:11:39.251 21:16:54 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:11:39.251 21:16:54 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:11:39.251 21:16:54 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:11:39.251 21:16:54 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:11:39.251 21:16:54 -- common/build_config.sh@19 -- # 
CONFIG_ENV=/var/jenkins/workspace/dsa-phy-autotest/spdk/lib/env_dpdk 00:11:39.251 21:16:54 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:11:39.251 21:16:54 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:11:39.251 21:16:54 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:11:39.251 21:16:54 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:11:39.251 21:16:54 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:11:39.251 21:16:54 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:11:39.251 21:16:54 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:11:39.251 21:16:54 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:11:39.251 21:16:54 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:11:39.251 21:16:54 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:11:39.251 21:16:54 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:11:39.251 21:16:54 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:11:39.251 21:16:54 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:11:39.251 21:16:54 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:11:39.251 21:16:54 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:11:39.251 21:16:54 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:11:39.251 21:16:54 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build 00:11:39.251 21:16:54 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:11:39.251 21:16:54 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:11:39.251 21:16:54 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:11:39.252 21:16:54 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:11:39.252 21:16:54 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:11:39.252 21:16:54 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:11:39.252 21:16:54 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:11:39.252 21:16:54 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:11:39.252 21:16:54 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:11:39.252 21:16:54 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:11:39.252 21:16:54 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:11:39.252 21:16:54 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:11:39.252 21:16:54 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:11:39.252 21:16:54 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:11:39.252 21:16:54 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:11:39.252 21:16:54 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:11:39.252 21:16:54 -- common/build_config.sh@53 -- # CONFIG_HAVE_EVP_MAC=y 00:11:39.252 21:16:54 -- common/build_config.sh@54 -- # CONFIG_URING_ZNS=n 00:11:39.252 21:16:54 -- common/build_config.sh@55 -- # CONFIG_WERROR=y 00:11:39.252 21:16:54 -- common/build_config.sh@56 -- # CONFIG_HAVE_LIBBSD=n 00:11:39.252 21:16:54 -- common/build_config.sh@57 -- # CONFIG_UBSAN=y 00:11:39.252 21:16:54 -- common/build_config.sh@58 -- # CONFIG_IPSEC_MB_DIR= 00:11:39.252 21:16:54 -- common/build_config.sh@59 -- # CONFIG_GOLANG=n 00:11:39.252 21:16:54 -- common/build_config.sh@60 -- # CONFIG_ISAL=y 00:11:39.252 21:16:54 -- common/build_config.sh@61 -- # CONFIG_IDXD_KERNEL=n 00:11:39.252 21:16:54 -- common/build_config.sh@62 -- # CONFIG_DPDK_LIB_DIR= 00:11:39.252 21:16:54 -- common/build_config.sh@63 -- # CONFIG_RDMA_PROV=verbs 00:11:39.252 21:16:54 -- common/build_config.sh@64 -- # CONFIG_APPS=y 00:11:39.252 21:16:54 -- common/build_config.sh@65 -- # 
CONFIG_SHARED=y 00:11:39.252 21:16:54 -- common/build_config.sh@66 -- # CONFIG_HAVE_KEYUTILS=n 00:11:39.252 21:16:54 -- common/build_config.sh@67 -- # CONFIG_FC_PATH= 00:11:39.252 21:16:54 -- common/build_config.sh@68 -- # CONFIG_DPDK_PKG_CONFIG=n 00:11:39.252 21:16:54 -- common/build_config.sh@69 -- # CONFIG_FC=n 00:11:39.252 21:16:54 -- common/build_config.sh@70 -- # CONFIG_AVAHI=n 00:11:39.252 21:16:54 -- common/build_config.sh@71 -- # CONFIG_FIO_PLUGIN=y 00:11:39.252 21:16:54 -- common/build_config.sh@72 -- # CONFIG_RAID5F=n 00:11:39.252 21:16:54 -- common/build_config.sh@73 -- # CONFIG_EXAMPLES=y 00:11:39.252 21:16:54 -- common/build_config.sh@74 -- # CONFIG_TESTS=y 00:11:39.252 21:16:54 -- common/build_config.sh@75 -- # CONFIG_CRYPTO_MLX5=n 00:11:39.252 21:16:54 -- common/build_config.sh@76 -- # CONFIG_MAX_LCORES= 00:11:39.252 21:16:54 -- common/build_config.sh@77 -- # CONFIG_IPSEC_MB=n 00:11:39.252 21:16:54 -- common/build_config.sh@78 -- # CONFIG_PGO_DIR= 00:11:39.252 21:16:54 -- common/build_config.sh@79 -- # CONFIG_DEBUG=y 00:11:39.252 21:16:54 -- common/build_config.sh@80 -- # CONFIG_DPDK_COMPRESSDEV=n 00:11:39.252 21:16:54 -- common/build_config.sh@81 -- # CONFIG_CROSS_PREFIX= 00:11:39.252 21:16:54 -- common/build_config.sh@82 -- # CONFIG_URING=n 00:11:39.252 21:16:54 -- common/autotest_common.sh@53 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common/applications.sh 00:11:39.252 21:16:54 -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common/applications.sh 00:11:39.252 21:16:54 -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common 00:11:39.252 21:16:54 -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/common 00:11:39.252 21:16:54 -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/dsa-phy-autotest/spdk 00:11:39.252 21:16:54 -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin 00:11:39.252 21:16:54 -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/app 00:11:39.252 21:16:54 -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples 00:11:39.252 21:16:54 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:11:39.252 21:16:54 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:11:39.252 21:16:54 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:11:39.252 21:16:54 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:11:39.252 21:16:54 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:11:39.252 21:16:54 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:11:39.252 21:16:54 -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/dsa-phy-autotest/spdk/include/spdk/config.h ]] 00:11:39.252 21:16:54 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:11:39.252 #define SPDK_CONFIG_H 00:11:39.252 #define SPDK_CONFIG_APPS 1 00:11:39.252 #define SPDK_CONFIG_ARCH native 00:11:39.252 #define SPDK_CONFIG_ASAN 1 00:11:39.252 #undef SPDK_CONFIG_AVAHI 00:11:39.252 #undef SPDK_CONFIG_CET 00:11:39.252 #define SPDK_CONFIG_COVERAGE 1 00:11:39.252 #define SPDK_CONFIG_CROSS_PREFIX 00:11:39.252 #undef SPDK_CONFIG_CRYPTO 00:11:39.252 #undef SPDK_CONFIG_CRYPTO_MLX5 00:11:39.252 #undef SPDK_CONFIG_CUSTOMOCF 00:11:39.252 #undef SPDK_CONFIG_DAOS 00:11:39.252 #define 
SPDK_CONFIG_DAOS_DIR 00:11:39.252 #define SPDK_CONFIG_DEBUG 1 00:11:39.252 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:11:39.252 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build 00:11:39.252 #define SPDK_CONFIG_DPDK_INC_DIR 00:11:39.252 #define SPDK_CONFIG_DPDK_LIB_DIR 00:11:39.252 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:11:39.252 #define SPDK_CONFIG_ENV /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/env_dpdk 00:11:39.252 #define SPDK_CONFIG_EXAMPLES 1 00:11:39.252 #undef SPDK_CONFIG_FC 00:11:39.252 #define SPDK_CONFIG_FC_PATH 00:11:39.252 #define SPDK_CONFIG_FIO_PLUGIN 1 00:11:39.252 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:11:39.252 #undef SPDK_CONFIG_FUSE 00:11:39.252 #undef SPDK_CONFIG_FUZZER 00:11:39.252 #define SPDK_CONFIG_FUZZER_LIB 00:11:39.252 #undef SPDK_CONFIG_GOLANG 00:11:39.252 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:11:39.252 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:11:39.252 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:11:39.252 #undef SPDK_CONFIG_HAVE_KEYUTILS 00:11:39.252 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:11:39.252 #undef SPDK_CONFIG_HAVE_LIBBSD 00:11:39.252 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:11:39.252 #define SPDK_CONFIG_IDXD 1 00:11:39.252 #undef SPDK_CONFIG_IDXD_KERNEL 00:11:39.252 #undef SPDK_CONFIG_IPSEC_MB 00:11:39.252 #define SPDK_CONFIG_IPSEC_MB_DIR 00:11:39.252 #define SPDK_CONFIG_ISAL 1 00:11:39.252 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:11:39.252 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:11:39.252 #define SPDK_CONFIG_LIBDIR 00:11:39.252 #undef SPDK_CONFIG_LTO 00:11:39.252 #define SPDK_CONFIG_MAX_LCORES 00:11:39.252 #define SPDK_CONFIG_NVME_CUSE 1 00:11:39.252 #undef SPDK_CONFIG_OCF 00:11:39.252 #define SPDK_CONFIG_OCF_PATH 00:11:39.252 #define SPDK_CONFIG_OPENSSL_PATH 00:11:39.252 #undef SPDK_CONFIG_PGO_CAPTURE 00:11:39.252 #define SPDK_CONFIG_PGO_DIR 00:11:39.252 #undef SPDK_CONFIG_PGO_USE 00:11:39.252 #define SPDK_CONFIG_PREFIX /usr/local 00:11:39.252 #undef SPDK_CONFIG_RAID5F 00:11:39.252 #undef SPDK_CONFIG_RBD 00:11:39.252 #define SPDK_CONFIG_RDMA 1 00:11:39.252 #define SPDK_CONFIG_RDMA_PROV verbs 00:11:39.252 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:11:39.252 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:11:39.252 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:11:39.252 #define SPDK_CONFIG_SHARED 1 00:11:39.252 #undef SPDK_CONFIG_SMA 00:11:39.252 #define SPDK_CONFIG_TESTS 1 00:11:39.252 #undef SPDK_CONFIG_TSAN 00:11:39.252 #define SPDK_CONFIG_UBLK 1 00:11:39.252 #define SPDK_CONFIG_UBSAN 1 00:11:39.252 #undef SPDK_CONFIG_UNIT_TESTS 00:11:39.252 #undef SPDK_CONFIG_URING 00:11:39.252 #define SPDK_CONFIG_URING_PATH 00:11:39.252 #undef SPDK_CONFIG_URING_ZNS 00:11:39.252 #undef SPDK_CONFIG_USDT 00:11:39.252 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:11:39.252 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:11:39.252 #undef SPDK_CONFIG_VFIO_USER 00:11:39.252 #define SPDK_CONFIG_VFIO_USER_DIR 00:11:39.252 #define SPDK_CONFIG_VHOST 1 00:11:39.252 #define SPDK_CONFIG_VIRTIO 1 00:11:39.252 #undef SPDK_CONFIG_VTUNE 00:11:39.252 #define SPDK_CONFIG_VTUNE_DIR 00:11:39.252 #define SPDK_CONFIG_WERROR 1 00:11:39.252 #define SPDK_CONFIG_WPDK_DIR 00:11:39.252 #undef SPDK_CONFIG_XNVME 00:11:39.252 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:11:39.252 21:16:54 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:11:39.252 21:16:54 -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:11:39.252 21:16:54 -- scripts/common.sh@502 -- 
# [[ -e /bin/wpdk_common.sh ]] 00:11:39.252 21:16:54 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:39.252 21:16:54 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:39.252 21:16:54 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.252 21:16:54 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.252 21:16:54 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.252 21:16:54 -- paths/export.sh@5 -- # export PATH 00:11:39.253 21:16:54 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.253 21:16:54 -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/perf/pm/common 00:11:39.253 21:16:54 -- pm/common@6 -- # dirname /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/perf/pm/common 00:11:39.253 21:16:54 -- pm/common@6 -- # readlink -f /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/perf/pm 00:11:39.253 21:16:54 -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/perf/pm 00:11:39.253 21:16:54 -- pm/common@7 -- # readlink -f /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/perf/pm/../../../ 00:11:39.253 21:16:54 -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/dsa-phy-autotest/spdk 00:11:39.253 21:16:54 -- pm/common@67 -- # TEST_TAG=N/A 00:11:39.253 21:16:54 -- pm/common@68 -- # TEST_TAG_FILE=/var/jenkins/workspace/dsa-phy-autotest/spdk/.run_test_name 00:11:39.253 21:16:54 -- pm/common@70 -- # 
PM_OUTPUTDIR=/var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power 00:11:39.253 21:16:54 -- pm/common@71 -- # uname -s 00:11:39.253 21:16:54 -- pm/common@71 -- # PM_OS=Linux 00:11:39.253 21:16:54 -- pm/common@73 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:11:39.253 21:16:54 -- pm/common@74 -- # [[ Linux == FreeBSD ]] 00:11:39.253 21:16:54 -- pm/common@76 -- # [[ Linux == Linux ]] 00:11:39.253 21:16:54 -- pm/common@76 -- # [[ ............................... != QEMU ]] 00:11:39.253 21:16:54 -- pm/common@76 -- # [[ ! -e /.dockerenv ]] 00:11:39.253 21:16:54 -- pm/common@79 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:11:39.253 21:16:54 -- pm/common@80 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:11:39.253 21:16:54 -- pm/common@83 -- # MONITOR_RESOURCES_PIDS=() 00:11:39.253 21:16:54 -- pm/common@83 -- # declare -A MONITOR_RESOURCES_PIDS 00:11:39.253 21:16:54 -- pm/common@85 -- # mkdir -p /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power 00:11:39.253 21:16:54 -- common/autotest_common.sh@57 -- # : 0 00:11:39.253 21:16:54 -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:11:39.253 21:16:54 -- common/autotest_common.sh@61 -- # : 0 00:11:39.253 21:16:54 -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:11:39.253 21:16:54 -- common/autotest_common.sh@63 -- # : 0 00:11:39.253 21:16:54 -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:11:39.253 21:16:54 -- common/autotest_common.sh@65 -- # : 1 00:11:39.253 21:16:54 -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:11:39.253 21:16:54 -- common/autotest_common.sh@67 -- # : 0 00:11:39.253 21:16:54 -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:11:39.253 21:16:54 -- common/autotest_common.sh@69 -- # : 00:11:39.253 21:16:54 -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:11:39.253 21:16:54 -- common/autotest_common.sh@71 -- # : 0 00:11:39.253 21:16:54 -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:11:39.253 21:16:54 -- common/autotest_common.sh@73 -- # : 0 00:11:39.253 21:16:54 -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:11:39.253 21:16:54 -- common/autotest_common.sh@75 -- # : 0 00:11:39.253 21:16:54 -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:11:39.253 21:16:54 -- common/autotest_common.sh@77 -- # : 0 00:11:39.253 21:16:54 -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:11:39.253 21:16:54 -- common/autotest_common.sh@79 -- # : 0 00:11:39.253 21:16:54 -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:11:39.253 21:16:54 -- common/autotest_common.sh@81 -- # : 0 00:11:39.253 21:16:54 -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:11:39.253 21:16:54 -- common/autotest_common.sh@83 -- # : 0 00:11:39.253 21:16:54 -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:11:39.253 21:16:54 -- common/autotest_common.sh@85 -- # : 0 00:11:39.253 21:16:54 -- common/autotest_common.sh@86 -- # export SPDK_TEST_NVME_CLI 00:11:39.253 21:16:54 -- common/autotest_common.sh@87 -- # : 0 00:11:39.253 21:16:54 -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:11:39.253 21:16:54 -- common/autotest_common.sh@89 -- # : 0 00:11:39.253 21:16:54 -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:11:39.253 21:16:54 -- common/autotest_common.sh@91 -- # : 1 00:11:39.253 21:16:54 -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:11:39.253 21:16:54 -- common/autotest_common.sh@93 -- # 
: 0 00:11:39.253 21:16:54 -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:11:39.253 21:16:54 -- common/autotest_common.sh@95 -- # : 0 00:11:39.253 21:16:54 -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:11:39.253 21:16:54 -- common/autotest_common.sh@97 -- # : 0 00:11:39.253 21:16:54 -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:11:39.253 21:16:54 -- common/autotest_common.sh@99 -- # : 0 00:11:39.253 21:16:54 -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:11:39.253 21:16:54 -- common/autotest_common.sh@101 -- # : tcp 00:11:39.253 21:16:54 -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:11:39.253 21:16:54 -- common/autotest_common.sh@103 -- # : 0 00:11:39.253 21:16:54 -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:11:39.253 21:16:54 -- common/autotest_common.sh@105 -- # : 0 00:11:39.253 21:16:54 -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:11:39.253 21:16:54 -- common/autotest_common.sh@107 -- # : 0 00:11:39.253 21:16:54 -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:11:39.253 21:16:54 -- common/autotest_common.sh@109 -- # : 0 00:11:39.253 21:16:54 -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:11:39.253 21:16:54 -- common/autotest_common.sh@111 -- # : 0 00:11:39.253 21:16:54 -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:11:39.253 21:16:54 -- common/autotest_common.sh@113 -- # : 0 00:11:39.253 21:16:54 -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:11:39.253 21:16:54 -- common/autotest_common.sh@115 -- # : 0 00:11:39.253 21:16:54 -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:11:39.253 21:16:54 -- common/autotest_common.sh@117 -- # : 0 00:11:39.253 21:16:54 -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:11:39.253 21:16:54 -- common/autotest_common.sh@119 -- # : 1 00:11:39.253 21:16:54 -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:11:39.253 21:16:54 -- common/autotest_common.sh@121 -- # : 1 00:11:39.253 21:16:54 -- common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:11:39.253 21:16:54 -- common/autotest_common.sh@123 -- # : 00:11:39.253 21:16:54 -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:11:39.253 21:16:54 -- common/autotest_common.sh@125 -- # : 0 00:11:39.253 21:16:54 -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:11:39.253 21:16:54 -- common/autotest_common.sh@127 -- # : 0 00:11:39.253 21:16:54 -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:11:39.253 21:16:54 -- common/autotest_common.sh@129 -- # : 0 00:11:39.253 21:16:54 -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:11:39.253 21:16:54 -- common/autotest_common.sh@131 -- # : 0 00:11:39.253 21:16:54 -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 00:11:39.253 21:16:54 -- common/autotest_common.sh@133 -- # : 0 00:11:39.253 21:16:54 -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:11:39.253 21:16:54 -- common/autotest_common.sh@135 -- # : 0 00:11:39.253 21:16:54 -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:11:39.253 21:16:54 -- common/autotest_common.sh@137 -- # : 00:11:39.253 21:16:54 -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:11:39.253 21:16:54 -- common/autotest_common.sh@139 -- # : true 00:11:39.253 21:16:54 -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:11:39.253 21:16:54 -- 
common/autotest_common.sh@141 -- # : 0 00:11:39.253 21:16:54 -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:11:39.253 21:16:54 -- common/autotest_common.sh@143 -- # : 0 00:11:39.253 21:16:54 -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:11:39.253 21:16:54 -- common/autotest_common.sh@145 -- # : 0 00:11:39.253 21:16:54 -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:11:39.253 21:16:54 -- common/autotest_common.sh@147 -- # : 0 00:11:39.253 21:16:54 -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:11:39.253 21:16:54 -- common/autotest_common.sh@149 -- # : 0 00:11:39.253 21:16:54 -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:11:39.254 21:16:54 -- common/autotest_common.sh@151 -- # : 0 00:11:39.254 21:16:54 -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:11:39.254 21:16:54 -- common/autotest_common.sh@153 -- # : 00:11:39.254 21:16:54 -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:11:39.254 21:16:54 -- common/autotest_common.sh@155 -- # : 0 00:11:39.254 21:16:54 -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:11:39.254 21:16:54 -- common/autotest_common.sh@157 -- # : 0 00:11:39.254 21:16:54 -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:11:39.254 21:16:54 -- common/autotest_common.sh@159 -- # : 0 00:11:39.254 21:16:54 -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:11:39.254 21:16:54 -- common/autotest_common.sh@161 -- # : 1 00:11:39.254 21:16:54 -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:11:39.254 21:16:54 -- common/autotest_common.sh@163 -- # : 1 00:11:39.254 21:16:54 -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:11:39.254 21:16:54 -- common/autotest_common.sh@166 -- # : 00:11:39.254 21:16:54 -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:11:39.254 21:16:54 -- common/autotest_common.sh@168 -- # : 0 00:11:39.254 21:16:54 -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:11:39.254 21:16:54 -- common/autotest_common.sh@170 -- # : 0 00:11:39.254 21:16:54 -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:11:39.254 21:16:54 -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/lib 00:11:39.254 21:16:54 -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/lib 00:11:39.254 21:16:54 -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build/lib 00:11:39.254 21:16:54 -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build/lib 00:11:39.254 21:16:54 -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:39.254 21:16:54 -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:39.254 21:16:54 -- common/autotest_common.sh@177 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:39.254 21:16:54 -- common/autotest_common.sh@177 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:39.254 21:16:54 -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:11:39.254 21:16:54 -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:11:39.254 21:16:54 -- common/autotest_common.sh@184 -- # export PYTHONPATH=:/var/jenkins/workspace/dsa-phy-autotest/spdk/python:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/dsa-phy-autotest/spdk/python:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/dsa-phy-autotest/spdk/python:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/dsa-phy-autotest/spdk/python 00:11:39.254 21:16:54 -- common/autotest_common.sh@184 -- # PYTHONPATH=:/var/jenkins/workspace/dsa-phy-autotest/spdk/python:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/dsa-phy-autotest/spdk/python:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/dsa-phy-autotest/spdk/python:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/dsa-phy-autotest/spdk/python 00:11:39.254 21:16:54 -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:11:39.254 21:16:54 -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:11:39.254 21:16:54 -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:39.254 21:16:54 -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:39.254 21:16:54 -- common/autotest_common.sh@193 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:39.254 21:16:54 -- common/autotest_common.sh@193 -- # 
UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:39.254 21:16:54 -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:11:39.254 21:16:54 -- common/autotest_common.sh@198 -- # rm -rf /var/tmp/asan_suppression_file 00:11:39.254 21:16:54 -- common/autotest_common.sh@199 -- # cat 00:11:39.254 21:16:54 -- common/autotest_common.sh@225 -- # echo leak:libfuse3.so 00:11:39.254 21:16:54 -- common/autotest_common.sh@227 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:39.254 21:16:54 -- common/autotest_common.sh@227 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:39.254 21:16:54 -- common/autotest_common.sh@229 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:39.254 21:16:54 -- common/autotest_common.sh@229 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:39.254 21:16:54 -- common/autotest_common.sh@231 -- # '[' -z /var/spdk/dependencies ']' 00:11:39.254 21:16:54 -- common/autotest_common.sh@234 -- # export DEPENDENCY_DIR 00:11:39.254 21:16:54 -- common/autotest_common.sh@238 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin 00:11:39.254 21:16:54 -- common/autotest_common.sh@238 -- # SPDK_BIN_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin 00:11:39.254 21:16:54 -- common/autotest_common.sh@239 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples 00:11:39.254 21:16:54 -- common/autotest_common.sh@239 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples 00:11:39.254 21:16:54 -- common/autotest_common.sh@242 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:39.254 21:16:54 -- common/autotest_common.sh@242 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:39.254 21:16:54 -- common/autotest_common.sh@243 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:39.254 21:16:54 -- common/autotest_common.sh@243 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:39.254 21:16:54 -- common/autotest_common.sh@245 -- # export AR_TOOL=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:39.254 21:16:54 -- common/autotest_common.sh@245 -- # AR_TOOL=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:39.254 21:16:54 -- common/autotest_common.sh@248 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:39.254 21:16:54 -- common/autotest_common.sh@248 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:39.254 21:16:54 -- common/autotest_common.sh@251 -- # '[' 0 -eq 0 ']' 00:11:39.254 21:16:54 -- common/autotest_common.sh@252 -- # export valgrind= 00:11:39.254 21:16:54 -- common/autotest_common.sh@252 -- # valgrind= 00:11:39.254 21:16:54 -- common/autotest_common.sh@258 -- # uname -s 00:11:39.254 21:16:54 -- common/autotest_common.sh@258 -- # '[' Linux = Linux ']' 00:11:39.254 21:16:54 -- common/autotest_common.sh@259 -- # HUGEMEM=4096 00:11:39.254 21:16:54 -- common/autotest_common.sh@260 -- # export CLEAR_HUGE=yes 00:11:39.254 21:16:54 -- common/autotest_common.sh@260 -- # CLEAR_HUGE=yes 00:11:39.254 21:16:54 -- common/autotest_common.sh@261 -- # [[ 0 -eq 1 ]] 00:11:39.254 21:16:54 -- common/autotest_common.sh@261 -- # [[ 0 -eq 1 ]] 00:11:39.254 21:16:54 -- common/autotest_common.sh@268 -- # MAKE=make 00:11:39.254 21:16:54 -- common/autotest_common.sh@269 -- # MAKEFLAGS=-j128 00:11:39.254 21:16:54 -- common/autotest_common.sh@285 
-- # export HUGEMEM=4096 00:11:39.254 21:16:54 -- common/autotest_common.sh@285 -- # HUGEMEM=4096 00:11:39.254 21:16:54 -- common/autotest_common.sh@287 -- # NO_HUGE=() 00:11:39.254 21:16:54 -- common/autotest_common.sh@288 -- # TEST_MODE= 00:11:39.254 21:16:54 -- common/autotest_common.sh@289 -- # for i in "$@" 00:11:39.254 21:16:54 -- common/autotest_common.sh@290 -- # case "$i" in 00:11:39.254 21:16:54 -- common/autotest_common.sh@295 -- # TEST_TRANSPORT=tcp 00:11:39.254 21:16:54 -- common/autotest_common.sh@307 -- # [[ -z 1089385 ]] 00:11:39.254 21:16:54 -- common/autotest_common.sh@307 -- # kill -0 1089385 00:11:39.254 21:16:54 -- common/autotest_common.sh@1666 -- # set_test_storage 2147483648 00:11:39.254 21:16:54 -- common/autotest_common.sh@317 -- # [[ -v testdir ]] 00:11:39.254 21:16:54 -- common/autotest_common.sh@319 -- # local requested_size=2147483648 00:11:39.254 21:16:54 -- common/autotest_common.sh@320 -- # local mount target_dir 00:11:39.254 21:16:54 -- common/autotest_common.sh@322 -- # local -A mounts fss sizes avails uses 00:11:39.254 21:16:54 -- common/autotest_common.sh@323 -- # local source fs size avail mount use 00:11:39.254 21:16:54 -- common/autotest_common.sh@325 -- # local storage_fallback storage_candidates 00:11:39.254 21:16:54 -- common/autotest_common.sh@327 -- # mktemp -udt spdk.XXXXXX 00:11:39.254 21:16:54 -- common/autotest_common.sh@327 -- # storage_fallback=/tmp/spdk.1OcVnq 00:11:39.254 21:16:54 -- common/autotest_common.sh@332 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:11:39.254 21:16:54 -- common/autotest_common.sh@334 -- # [[ -n '' ]] 00:11:39.254 21:16:54 -- common/autotest_common.sh@339 -- # [[ -n '' ]] 00:11:39.254 21:16:54 -- common/autotest_common.sh@344 -- # mkdir -p /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target /tmp/spdk.1OcVnq/tests/target /tmp/spdk.1OcVnq 00:11:39.254 21:16:54 -- common/autotest_common.sh@347 -- # requested_size=2214592512 00:11:39.254 21:16:54 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:11:39.254 21:16:54 -- common/autotest_common.sh@316 -- # df -T 00:11:39.254 21:16:54 -- common/autotest_common.sh@316 -- # grep -v Filesystem 00:11:39.254 21:16:54 -- common/autotest_common.sh@350 -- # mounts["$mount"]=spdk_devtmpfs 00:11:39.254 21:16:54 -- common/autotest_common.sh@350 -- # fss["$mount"]=devtmpfs 00:11:39.254 21:16:54 -- common/autotest_common.sh@351 -- # avails["$mount"]=67108864 00:11:39.254 21:16:54 -- common/autotest_common.sh@351 -- # sizes["$mount"]=67108864 00:11:39.255 21:16:54 -- common/autotest_common.sh@352 -- # uses["$mount"]=0 00:11:39.255 21:16:54 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:11:39.255 21:16:54 -- common/autotest_common.sh@350 -- # mounts["$mount"]=spdk_root 00:11:39.255 21:16:54 -- common/autotest_common.sh@350 -- # fss["$mount"]=overlay 00:11:39.255 21:16:54 -- common/autotest_common.sh@351 -- # avails["$mount"]=262459195392 00:11:39.255 21:16:54 -- common/autotest_common.sh@351 -- # sizes["$mount"]=270047416320 00:11:39.255 21:16:54 -- common/autotest_common.sh@352 -- # uses["$mount"]=7588220928 00:11:39.255 21:16:54 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:11:39.255 21:16:54 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:11:39.255 21:16:54 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:11:39.255 21:16:54 -- common/autotest_common.sh@351 -- # avails["$mount"]=135021092864 
00:11:39.255 21:16:54 -- common/autotest_common.sh@351 -- # sizes["$mount"]=135023706112 00:11:39.255 21:16:54 -- common/autotest_common.sh@352 -- # uses["$mount"]=2613248 00:11:39.255 21:16:54 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:11:39.255 21:16:54 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:11:39.255 21:16:54 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:11:39.255 21:16:54 -- common/autotest_common.sh@351 -- # avails["$mount"]=53999808512 00:11:39.255 21:16:54 -- common/autotest_common.sh@351 -- # sizes["$mount"]=54009483264 00:11:39.255 21:16:54 -- common/autotest_common.sh@352 -- # uses["$mount"]=9674752 00:11:39.255 21:16:54 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:11:39.255 21:16:54 -- common/autotest_common.sh@350 -- # mounts["$mount"]=efivarfs 00:11:39.255 21:16:54 -- common/autotest_common.sh@350 -- # fss["$mount"]=efivarfs 00:11:39.255 21:16:54 -- common/autotest_common.sh@351 -- # avails["$mount"]=200704 00:11:39.255 21:16:54 -- common/autotest_common.sh@351 -- # sizes["$mount"]=507904 00:11:39.255 21:16:54 -- common/autotest_common.sh@352 -- # uses["$mount"]=303104 00:11:39.255 21:16:54 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:11:39.255 21:16:54 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:11:39.255 21:16:54 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:11:39.255 21:16:54 -- common/autotest_common.sh@351 -- # avails["$mount"]=135023042560 00:11:39.255 21:16:54 -- common/autotest_common.sh@351 -- # sizes["$mount"]=135023710208 00:11:39.255 21:16:54 -- common/autotest_common.sh@352 -- # uses["$mount"]=667648 00:11:39.255 21:16:54 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:11:39.255 21:16:54 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:11:39.255 21:16:54 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:11:39.255 21:16:54 -- common/autotest_common.sh@351 -- # avails["$mount"]=27004735488 00:11:39.255 21:16:54 -- common/autotest_common.sh@351 -- # sizes["$mount"]=27004739584 00:11:39.255 21:16:54 -- common/autotest_common.sh@352 -- # uses["$mount"]=4096 00:11:39.255 21:16:54 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:11:39.255 21:16:54 -- common/autotest_common.sh@355 -- # printf '* Looking for test storage...\n' 00:11:39.255 * Looking for test storage... 
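The df walk above is autotest_common.sh's set_test_storage sizing up every mount before it prints "* Looking for test storage...". A minimal sketch of that probing and of the candidate selection that follows, with the array names, field order, and awk filter taken from the trace; the 1 KiB-to-byte conversion and the two-candidate list are assumptions made for illustration:

# Sketch of set_test_storage: index every mount by free space, then pick the
# first candidate directory whose filesystem can hold the requested size.
requested_size=2147483648                  # bytes; matches set_test_storage 2147483648 above
declare -A mounts fss sizes avails
while read -r source fs size use avail _ mount; do
    mounts["$mount"]=$source
    fss["$mount"]=$fs
    sizes["$mount"]=$((size * 1024))       # df -T prints 1 KiB blocks; the byte-scale
    avails["$mount"]=$((avail * 1024))     # values in the trace imply this conversion
done < <(df -T | grep -v Filesystem)

for target_dir in "$testdir" "$storage_fallback"; do   # candidates, per the trace
    mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')
    if (( avails[$mount] >= requested_size )); then
        printf '* Found test storage at %s\n' "$target_dir"
        break
    fi
done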
00:11:39.255 21:16:54 -- common/autotest_common.sh@357 -- # local target_space new_size 00:11:39.255 21:16:54 -- common/autotest_common.sh@358 -- # for target_dir in "${storage_candidates[@]}" 00:11:39.255 21:16:54 -- common/autotest_common.sh@361 -- # df /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:11:39.255 21:16:54 -- common/autotest_common.sh@361 -- # awk '$1 !~ /Filesystem/{print $6}' 00:11:39.516 21:16:54 -- common/autotest_common.sh@361 -- # mount=/ 00:11:39.516 21:16:54 -- common/autotest_common.sh@363 -- # target_space=262459195392 00:11:39.516 21:16:54 -- common/autotest_common.sh@364 -- # (( target_space == 0 || target_space < requested_size )) 00:11:39.516 21:16:54 -- common/autotest_common.sh@367 -- # (( target_space >= requested_size )) 00:11:39.516 21:16:54 -- common/autotest_common.sh@369 -- # [[ overlay == tmpfs ]] 00:11:39.516 21:16:54 -- common/autotest_common.sh@369 -- # [[ overlay == ramfs ]] 00:11:39.516 21:16:54 -- common/autotest_common.sh@369 -- # [[ / == / ]] 00:11:39.516 21:16:54 -- common/autotest_common.sh@370 -- # new_size=9802813440 00:11:39.516 21:16:54 -- common/autotest_common.sh@371 -- # (( new_size * 100 / sizes[/] > 95 )) 00:11:39.516 21:16:54 -- common/autotest_common.sh@376 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:11:39.516 21:16:54 -- common/autotest_common.sh@376 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:11:39.516 21:16:54 -- common/autotest_common.sh@377 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:11:39.516 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:11:39.516 21:16:54 -- common/autotest_common.sh@378 -- # return 0 00:11:39.517 21:16:54 -- common/autotest_common.sh@1668 -- # set -o errtrace 00:11:39.517 21:16:54 -- common/autotest_common.sh@1669 -- # shopt -s extdebug 00:11:39.517 21:16:54 -- common/autotest_common.sh@1670 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:11:39.517 21:16:54 -- common/autotest_common.sh@1672 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:11:39.517 21:16:54 -- common/autotest_common.sh@1673 -- # true 00:11:39.517 21:16:54 -- common/autotest_common.sh@1675 -- # xtrace_fd 00:11:39.517 21:16:54 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:11:39.517 21:16:54 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:11:39.517 21:16:54 -- common/autotest_common.sh@27 -- # exec 00:11:39.517 21:16:54 -- common/autotest_common.sh@29 -- # exec 00:11:39.517 21:16:54 -- common/autotest_common.sh@31 -- # xtrace_restore 00:11:39.517 21:16:54 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:11:39.517 21:16:54 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:11:39.517 21:16:54 -- common/autotest_common.sh@18 -- # set -x 00:11:39.517 21:16:54 -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:11:39.517 21:16:54 -- nvmf/common.sh@7 -- # uname -s 00:11:39.517 21:16:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:39.517 21:16:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:39.517 21:16:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:39.517 21:16:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:39.517 21:16:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:39.517 21:16:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:39.517 21:16:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:39.517 21:16:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:39.517 21:16:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:39.517 21:16:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:39.517 21:16:54 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b7babf-2e5c-ee11-906e-a4bf01970bf2 00:11:39.517 21:16:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=80b7babf-2e5c-ee11-906e-a4bf01970bf2 00:11:39.517 21:16:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:39.517 21:16:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:39.517 21:16:54 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:11:39.517 21:16:54 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:39.517 21:16:54 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:11:39.517 21:16:54 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:39.517 21:16:54 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:39.517 21:16:54 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:39.517 21:16:54 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.517 21:16:54 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.517 21:16:54 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.517 21:16:54 -- paths/export.sh@5 -- # export PATH 00:11:39.517 21:16:54 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.517 21:16:54 -- nvmf/common.sh@47 -- # : 0 00:11:39.517 21:16:54 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:39.517 21:16:54 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:39.517 21:16:54 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:39.517 21:16:54 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:39.517 21:16:54 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:39.517 21:16:54 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:39.517 21:16:54 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:39.517 21:16:54 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:39.517 21:16:54 -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:11:39.517 21:16:54 -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:39.517 21:16:54 -- target/filesystem.sh@15 -- # nvmftestinit 00:11:39.517 21:16:54 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:11:39.517 21:16:54 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:39.517 21:16:54 -- nvmf/common.sh@437 -- # prepare_net_devs 00:11:39.517 21:16:54 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:11:39.517 21:16:54 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:11:39.517 21:16:54 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:39.517 21:16:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:39.517 21:16:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:39.517 21:16:54 -- nvmf/common.sh@403 -- # [[ phy-fallback != virt ]] 00:11:39.517 21:16:54 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:11:39.517 21:16:54 -- nvmf/common.sh@285 -- # xtrace_disable 00:11:39.517 21:16:54 -- common/autotest_common.sh@10 -- # set +x 00:11:46.101 21:17:00 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:11:46.101 21:17:00 -- nvmf/common.sh@291 -- # pci_devs=() 00:11:46.101 21:17:00 -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:46.101 21:17:00 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:46.101 21:17:00 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:46.101 21:17:00 -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:46.101 21:17:00 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:46.101 21:17:00 
-- nvmf/common.sh@295 -- # net_devs=() 00:11:46.101 21:17:00 -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:46.101 21:17:00 -- nvmf/common.sh@296 -- # e810=() 00:11:46.101 21:17:00 -- nvmf/common.sh@296 -- # local -ga e810 00:11:46.101 21:17:00 -- nvmf/common.sh@297 -- # x722=() 00:11:46.101 21:17:00 -- nvmf/common.sh@297 -- # local -ga x722 00:11:46.101 21:17:00 -- nvmf/common.sh@298 -- # mlx=() 00:11:46.101 21:17:00 -- nvmf/common.sh@298 -- # local -ga mlx 00:11:46.101 21:17:00 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:46.101 21:17:00 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:46.101 21:17:00 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:46.101 21:17:00 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:46.101 21:17:00 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:46.101 21:17:00 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:46.101 21:17:00 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:46.101 21:17:00 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:46.101 21:17:00 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:46.101 21:17:00 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:46.101 21:17:00 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:46.101 21:17:00 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:46.101 21:17:00 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:46.101 21:17:00 -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:11:46.101 21:17:00 -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:11:46.101 21:17:00 -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:11:46.101 21:17:00 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:46.101 21:17:00 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:46.101 21:17:00 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:11:46.101 Found 0000:27:00.0 (0x8086 - 0x159b) 00:11:46.101 21:17:00 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:46.101 21:17:00 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:46.101 21:17:00 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:46.101 21:17:00 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:46.101 21:17:00 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:46.101 21:17:00 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:46.101 21:17:00 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:11:46.101 Found 0000:27:00.1 (0x8086 - 0x159b) 00:11:46.101 21:17:00 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:46.101 21:17:00 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:46.101 21:17:00 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:46.101 21:17:00 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:46.101 21:17:00 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:46.101 21:17:00 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:46.101 21:17:00 -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:11:46.101 21:17:00 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:46.101 21:17:00 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:46.101 21:17:00 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:46.101 21:17:00 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:46.101 21:17:00 -- nvmf/common.sh@389 -- # 
echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:11:46.101 Found net devices under 0000:27:00.0: cvl_0_0 00:11:46.101 21:17:00 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:46.101 21:17:00 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:46.101 21:17:00 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:46.101 21:17:00 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:46.101 21:17:00 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:46.101 21:17:00 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:11:46.101 Found net devices under 0000:27:00.1: cvl_0_1 00:11:46.101 21:17:00 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:46.101 21:17:00 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:11:46.101 21:17:00 -- nvmf/common.sh@403 -- # is_hw=yes 00:11:46.101 21:17:00 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:11:46.101 21:17:00 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:11:46.101 21:17:00 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:11:46.101 21:17:00 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:46.101 21:17:00 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:46.101 21:17:00 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:46.101 21:17:00 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:46.101 21:17:00 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:46.101 21:17:00 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:46.101 21:17:00 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:46.101 21:17:00 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:46.101 21:17:00 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:46.101 21:17:00 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:46.101 21:17:00 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:46.101 21:17:00 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:46.101 21:17:00 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:46.101 21:17:00 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:46.101 21:17:00 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:46.101 21:17:00 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:46.101 21:17:00 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:46.101 21:17:00 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:46.101 21:17:00 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:46.101 21:17:00 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:46.101 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:46.101 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.576 ms 00:11:46.101 00:11:46.101 --- 10.0.0.2 ping statistics --- 00:11:46.101 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:46.101 rtt min/avg/max/mdev = 0.576/0.576/0.576/0.000 ms 00:11:46.101 21:17:00 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:46.101 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:46.101 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.399 ms 00:11:46.101 00:11:46.101 --- 10.0.0.1 ping statistics --- 00:11:46.101 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:46.101 rtt min/avg/max/mdev = 0.399/0.399/0.399/0.000 ms 00:11:46.101 21:17:00 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:46.101 21:17:00 -- nvmf/common.sh@411 -- # return 0 00:11:46.101 21:17:00 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:11:46.101 21:17:00 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:46.101 21:17:00 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:11:46.101 21:17:00 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:11:46.101 21:17:00 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:46.101 21:17:00 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:11:46.101 21:17:00 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:11:46.101 21:17:00 -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:11:46.101 21:17:00 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:46.101 21:17:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:46.101 21:17:00 -- common/autotest_common.sh@10 -- # set +x 00:11:46.101 ************************************ 00:11:46.101 START TEST nvmf_filesystem_no_in_capsule 00:11:46.101 ************************************ 00:11:46.101 21:17:00 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_part 0 00:11:46.101 21:17:00 -- target/filesystem.sh@47 -- # in_capsule=0 00:11:46.101 21:17:00 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:46.101 21:17:00 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:11:46.101 21:17:00 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:46.101 21:17:00 -- common/autotest_common.sh@10 -- # set +x 00:11:46.101 21:17:00 -- nvmf/common.sh@470 -- # nvmfpid=1092999 00:11:46.101 21:17:00 -- nvmf/common.sh@471 -- # waitforlisten 1092999 00:11:46.101 21:17:00 -- common/autotest_common.sh@817 -- # '[' -z 1092999 ']' 00:11:46.101 21:17:00 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:46.101 21:17:00 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:46.101 21:17:00 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:46.101 21:17:00 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:46.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:46.101 21:17:00 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:46.101 21:17:00 -- common/autotest_common.sh@10 -- # set +x 00:11:46.101 [2024-04-24 21:17:00.532954] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization... 00:11:46.101 [2024-04-24 21:17:00.533085] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:46.101 EAL: No free 2048 kB hugepages reported on node 1 00:11:46.101 [2024-04-24 21:17:00.674854] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:46.101 [2024-04-24 21:17:00.775142] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
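Before the target application above came up, nvmf_tcp_init wired the two discovered ports together: the first port becomes the target side inside a private network namespace, the second stays in the root namespace as the initiator, and a ping in each direction proves the link. A condensed sketch using the same interface names, addresses, and port as the trace (error handling omitted):

# Target port lives in its own netns; initiator port stays in the root namespace.
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
NVMF_TARGET_INTERFACE=cvl_0_0
NVMF_INITIATOR_INTERFACE=cvl_0_1

ip -4 addr flush "$NVMF_TARGET_INTERFACE"
ip -4 addr flush "$NVMF_INITIATOR_INTERFACE"
ip netns add "$NVMF_TARGET_NAMESPACE"
ip link set "$NVMF_TARGET_INTERFACE" netns "$NVMF_TARGET_NAMESPACE"
ip addr add 10.0.0.1/24 dev "$NVMF_INITIATOR_INTERFACE"
ip netns exec "$NVMF_TARGET_NAMESPACE" ip addr add 10.0.0.2/24 dev "$NVMF_TARGET_INTERFACE"
ip link set "$NVMF_INITIATOR_INTERFACE" up
ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set "$NVMF_TARGET_INTERFACE" up
ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set lo up
iptables -I INPUT 1 -i "$NVMF_INITIATOR_INTERFACE" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                          # initiator -> target
ip netns exec "$NVMF_TARGET_NAMESPACE" ping -c 1 10.0.0.1   # target -> initiator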
00:11:46.101 [2024-04-24 21:17:00.775190] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:46.101 [2024-04-24 21:17:00.775203] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:46.101 [2024-04-24 21:17:00.775212] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:46.101 [2024-04-24 21:17:00.775220] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:46.101 [2024-04-24 21:17:00.775322] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:46.101 [2024-04-24 21:17:00.775383] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:46.101 [2024-04-24 21:17:00.775510] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:46.101 [2024-04-24 21:17:00.775521] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:46.363 21:17:01 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:46.363 21:17:01 -- common/autotest_common.sh@850 -- # return 0 00:11:46.363 21:17:01 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:11:46.363 21:17:01 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:46.363 21:17:01 -- common/autotest_common.sh@10 -- # set +x 00:11:46.363 21:17:01 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:46.363 21:17:01 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:46.363 21:17:01 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:46.363 21:17:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:46.363 21:17:01 -- common/autotest_common.sh@10 -- # set +x 00:11:46.363 [2024-04-24 21:17:01.277560] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:46.363 21:17:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:46.363 21:17:01 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:46.363 21:17:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:46.363 21:17:01 -- common/autotest_common.sh@10 -- # set +x 00:11:46.623 Malloc1 00:11:46.623 21:17:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:46.623 21:17:01 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:46.623 21:17:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:46.623 21:17:01 -- common/autotest_common.sh@10 -- # set +x 00:11:46.623 21:17:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:46.623 21:17:01 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:46.623 21:17:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:46.623 21:17:01 -- common/autotest_common.sh@10 -- # set +x 00:11:46.623 21:17:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:46.624 21:17:01 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:46.624 21:17:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:46.624 21:17:01 -- common/autotest_common.sh@10 -- # set +x 00:11:46.624 [2024-04-24 21:17:01.553238] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:46.624 21:17:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:46.624 21:17:01 -- target/filesystem.sh@58 -- # get_bdev_size 
Malloc1 00:11:46.624 21:17:01 -- common/autotest_common.sh@1364 -- # local bdev_name=Malloc1 00:11:46.624 21:17:01 -- common/autotest_common.sh@1365 -- # local bdev_info 00:11:46.624 21:17:01 -- common/autotest_common.sh@1366 -- # local bs 00:11:46.624 21:17:01 -- common/autotest_common.sh@1367 -- # local nb 00:11:46.624 21:17:01 -- common/autotest_common.sh@1368 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:46.624 21:17:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:46.624 21:17:01 -- common/autotest_common.sh@10 -- # set +x 00:11:46.624 21:17:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:46.624 21:17:01 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:11:46.624 { 00:11:46.624 "name": "Malloc1", 00:11:46.624 "aliases": [ 00:11:46.624 "9e6b4455-dbd8-4ee1-9b86-126fb081bc02" 00:11:46.624 ], 00:11:46.624 "product_name": "Malloc disk", 00:11:46.624 "block_size": 512, 00:11:46.624 "num_blocks": 1048576, 00:11:46.624 "uuid": "9e6b4455-dbd8-4ee1-9b86-126fb081bc02", 00:11:46.624 "assigned_rate_limits": { 00:11:46.624 "rw_ios_per_sec": 0, 00:11:46.624 "rw_mbytes_per_sec": 0, 00:11:46.624 "r_mbytes_per_sec": 0, 00:11:46.624 "w_mbytes_per_sec": 0 00:11:46.624 }, 00:11:46.624 "claimed": true, 00:11:46.624 "claim_type": "exclusive_write", 00:11:46.624 "zoned": false, 00:11:46.624 "supported_io_types": { 00:11:46.624 "read": true, 00:11:46.624 "write": true, 00:11:46.624 "unmap": true, 00:11:46.624 "write_zeroes": true, 00:11:46.624 "flush": true, 00:11:46.624 "reset": true, 00:11:46.624 "compare": false, 00:11:46.624 "compare_and_write": false, 00:11:46.624 "abort": true, 00:11:46.624 "nvme_admin": false, 00:11:46.624 "nvme_io": false 00:11:46.624 }, 00:11:46.624 "memory_domains": [ 00:11:46.624 { 00:11:46.624 "dma_device_id": "system", 00:11:46.624 "dma_device_type": 1 00:11:46.624 }, 00:11:46.624 { 00:11:46.624 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:46.624 "dma_device_type": 2 00:11:46.624 } 00:11:46.624 ], 00:11:46.624 "driver_specific": {} 00:11:46.624 } 00:11:46.624 ]' 00:11:46.624 21:17:01 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:11:46.883 21:17:01 -- common/autotest_common.sh@1369 -- # bs=512 00:11:46.883 21:17:01 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:11:46.883 21:17:01 -- common/autotest_common.sh@1370 -- # nb=1048576 00:11:46.883 21:17:01 -- common/autotest_common.sh@1373 -- # bdev_size=512 00:11:46.883 21:17:01 -- common/autotest_common.sh@1374 -- # echo 512 00:11:46.883 21:17:01 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:46.883 21:17:01 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b7babf-2e5c-ee11-906e-a4bf01970bf2 --hostid=80b7babf-2e5c-ee11-906e-a4bf01970bf2 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:48.279 21:17:03 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:48.279 21:17:03 -- common/autotest_common.sh@1184 -- # local i=0 00:11:48.279 21:17:03 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:11:48.279 21:17:03 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:11:48.279 21:17:03 -- common/autotest_common.sh@1191 -- # sleep 2 00:11:50.830 21:17:05 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:11:50.830 21:17:05 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:11:50.830 21:17:05 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:11:50.830 21:17:05 -- common/autotest_common.sh@1193 -- # nvme_devices=1 
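With the link verified, the test drives the target over JSON-RPC and then attaches the kernel initiator, exactly as traced above: a TCP transport with in-capsule data disabled, a 512 MiB malloc bdev, a subsystem with one namespace and one listener, then nvme connect plus a poll until the serial shows up in lsblk. A condensed sketch of that sequence; rpc.py stands in for the suite's rpc_cmd wrapper, but every call and the 15-retry/2-second wait come from the trace:

# Target side, against the nvmf_tgt launched in the namespace above:
rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0     # -c 0: in-capsule data disabled
rpc.py bdev_malloc_create 512 512 -b Malloc1            # 512 MiB bdev, 512 B blocks
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Initiator side: connect, then wait for the namespace to surface as a block device.
nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
    -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
for (( i = 0; i <= 15; i++ )); do
    lsblk -l -o NAME,SERIAL | grep -q SPDKISFASTANDAWESOME && break
    sleep 2
done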
00:11:50.830 21:17:05 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:11:50.830 21:17:05 -- common/autotest_common.sh@1194 -- # return 0 00:11:50.830 21:17:05 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:50.830 21:17:05 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:50.830 21:17:05 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:50.830 21:17:05 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:50.830 21:17:05 -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:50.830 21:17:05 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:50.830 21:17:05 -- setup/common.sh@80 -- # echo 536870912 00:11:50.830 21:17:05 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:50.830 21:17:05 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:50.830 21:17:05 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:50.830 21:17:05 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:50.830 21:17:05 -- target/filesystem.sh@69 -- # partprobe 00:11:51.402 21:17:06 -- target/filesystem.sh@70 -- # sleep 1 00:11:52.343 21:17:07 -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:11:52.343 21:17:07 -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:52.343 21:17:07 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:11:52.343 21:17:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:52.343 21:17:07 -- common/autotest_common.sh@10 -- # set +x 00:11:52.343 ************************************ 00:11:52.343 START TEST filesystem_ext4 00:11:52.343 ************************************ 00:11:52.343 21:17:07 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:52.343 21:17:07 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:52.343 21:17:07 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:52.343 21:17:07 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:52.343 21:17:07 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:11:52.343 21:17:07 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:11:52.343 21:17:07 -- common/autotest_common.sh@914 -- # local i=0 00:11:52.343 21:17:07 -- common/autotest_common.sh@915 -- # local force 00:11:52.343 21:17:07 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:11:52.343 21:17:07 -- common/autotest_common.sh@918 -- # force=-F 00:11:52.343 21:17:07 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:52.343 mke2fs 1.46.5 (30-Dec-2021) 00:11:52.343 Discarding device blocks: 0/522240 done 00:11:52.343 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:52.343 Filesystem UUID: dcd4d881-3682-45cc-871c-62807a91f0b6 00:11:52.343 Superblock backups stored on blocks: 00:11:52.343 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:52.343 00:11:52.343 Allocating group tables: 0/64 done 00:11:52.343 Writing inode tables: 0/64 done 00:11:52.603 Creating journal (8192 blocks): done 00:11:53.434 Writing superblocks and filesystem accounting information: 0/6426/64 done 00:11:53.434 00:11:53.434 21:17:08 -- common/autotest_common.sh@931 -- # return 0 00:11:53.434 21:17:08 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:54.377 21:17:09 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:54.377 21:17:09 -- target/filesystem.sh@25 -- # sync 00:11:54.377 21:17:09 -- target/filesystem.sh@26 -- 
# rm /mnt/device/aaa 00:11:54.377 21:17:09 -- target/filesystem.sh@27 -- # sync 00:11:54.377 21:17:09 -- target/filesystem.sh@29 -- # i=0 00:11:54.377 21:17:09 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:54.377 21:17:09 -- target/filesystem.sh@37 -- # kill -0 1092999 00:11:54.377 21:17:09 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:54.377 21:17:09 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:54.377 21:17:09 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:54.377 21:17:09 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:54.377 00:11:54.377 real 0m2.035s 00:11:54.377 user 0m0.020s 00:11:54.377 sys 0m0.077s 00:11:54.377 21:17:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:54.377 21:17:09 -- common/autotest_common.sh@10 -- # set +x 00:11:54.377 ************************************ 00:11:54.377 END TEST filesystem_ext4 00:11:54.377 ************************************ 00:11:54.377 21:17:09 -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:54.377 21:17:09 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:11:54.377 21:17:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:54.377 21:17:09 -- common/autotest_common.sh@10 -- # set +x 00:11:54.636 ************************************ 00:11:54.636 START TEST filesystem_btrfs 00:11:54.636 ************************************ 00:11:54.636 21:17:09 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:54.636 21:17:09 -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:54.636 21:17:09 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:54.636 21:17:09 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:54.636 21:17:09 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:11:54.636 21:17:09 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:11:54.636 21:17:09 -- common/autotest_common.sh@914 -- # local i=0 00:11:54.636 21:17:09 -- common/autotest_common.sh@915 -- # local force 00:11:54.636 21:17:09 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:11:54.636 21:17:09 -- common/autotest_common.sh@920 -- # force=-f 00:11:54.636 21:17:09 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:54.636 btrfs-progs v6.6.2 00:11:54.636 See https://btrfs.readthedocs.io for more information. 00:11:54.636 00:11:54.636 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
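The xtrace above (common/autotest_common.sh@912-931) reveals the shape of the make_filesystem helper driving each of these subtests: ext4 takes -F to force creation while btrfs and xfs take -f, and the helper returns 0 once mkfs succeeds. A minimal sketch reconstructed from those trace lines, assuming the unused counter i backs a retry loop that these passing runs never enter:

make_filesystem() {
    local fstype=$1        # ext4 | btrfs | xfs
    local dev_name=$2      # e.g. /dev/nvme0n1p1
    local i=0              # retry counter (loop not exercised in this trace)
    local force
    # ext4's mkfs forces with -F; mkfs.btrfs and mkfs.xfs use -f
    if [ "$fstype" = ext4 ]; then
        force=-F
    else
        force=-f
    fi
    mkfs.$fstype $force "$dev_name" && return 0
}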
00:11:54.636 NOTE: several default settings have changed in version 5.15, please make sure 00:11:54.636 this does not affect your deployments: 00:11:54.636 - DUP for metadata (-m dup) 00:11:54.636 - enabled no-holes (-O no-holes) 00:11:54.636 - enabled free-space-tree (-R free-space-tree) 00:11:54.636 00:11:54.636 Label: (null) 00:11:54.636 UUID: f9f08b42-7c65-4bd6-912f-558c01f093de 00:11:54.636 Node size: 16384 00:11:54.636 Sector size: 4096 00:11:54.636 Filesystem size: 510.00MiB 00:11:54.636 Block group profiles: 00:11:54.636 Data: single 8.00MiB 00:11:54.636 Metadata: DUP 32.00MiB 00:11:54.636 System: DUP 8.00MiB 00:11:54.636 SSD detected: yes 00:11:54.636 Zoned device: no 00:11:54.636 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:11:54.636 Runtime features: free-space-tree 00:11:54.636 Checksum: crc32c 00:11:54.636 Number of devices: 1 00:11:54.636 Devices: 00:11:54.636 ID SIZE PATH 00:11:54.636 1 510.00MiB /dev/nvme0n1p1 00:11:54.636 00:11:54.636 21:17:09 -- common/autotest_common.sh@931 -- # return 0 00:11:54.636 21:17:09 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:55.207 21:17:09 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:55.207 21:17:09 -- target/filesystem.sh@25 -- # sync 00:11:55.207 21:17:09 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:55.207 21:17:09 -- target/filesystem.sh@27 -- # sync 00:11:55.207 21:17:09 -- target/filesystem.sh@29 -- # i=0 00:11:55.207 21:17:09 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:55.207 21:17:09 -- target/filesystem.sh@37 -- # kill -0 1092999 00:11:55.207 21:17:09 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:55.207 21:17:09 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:55.207 21:17:09 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:55.207 21:17:09 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:55.207 00:11:55.207 real 0m0.599s 00:11:55.207 user 0m0.030s 00:11:55.207 sys 0m0.135s 00:11:55.207 21:17:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:55.207 21:17:09 -- common/autotest_common.sh@10 -- # set +x 00:11:55.207 ************************************ 00:11:55.207 END TEST filesystem_btrfs 00:11:55.207 ************************************ 00:11:55.207 21:17:10 -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:11:55.207 21:17:10 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:11:55.207 21:17:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:55.207 21:17:10 -- common/autotest_common.sh@10 -- # set +x 00:11:55.207 ************************************ 00:11:55.207 START TEST filesystem_xfs 00:11:55.207 ************************************ 00:11:55.207 21:17:10 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create xfs nvme0n1 00:11:55.207 21:17:10 -- target/filesystem.sh@18 -- # fstype=xfs 00:11:55.207 21:17:10 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:55.207 21:17:10 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:55.207 21:17:10 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:11:55.207 21:17:10 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:11:55.207 21:17:10 -- common/autotest_common.sh@914 -- # local i=0 00:11:55.207 21:17:10 -- common/autotest_common.sh@915 -- # local force 00:11:55.207 21:17:10 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:11:55.207 21:17:10 -- common/autotest_common.sh@920 -- # force=-f 00:11:55.207 21:17:10 -- 
common/autotest_common.sh@923 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:55.469 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:55.469 = sectsz=512 attr=2, projid32bit=1 00:11:55.469 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:55.469 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:55.469 data = bsize=4096 blocks=130560, imaxpct=25 00:11:55.469 = sunit=0 swidth=0 blks 00:11:55.469 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:55.469 log =internal log bsize=4096 blocks=16384, version=2 00:11:55.469 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:55.469 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:56.040 Discarding blocks...Done. 00:11:56.040 21:17:10 -- common/autotest_common.sh@931 -- # return 0 00:11:56.040 21:17:10 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:58.010 21:17:12 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:58.010 21:17:12 -- target/filesystem.sh@25 -- # sync 00:11:58.010 21:17:12 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:58.010 21:17:12 -- target/filesystem.sh@27 -- # sync 00:11:58.010 21:17:12 -- target/filesystem.sh@29 -- # i=0 00:11:58.010 21:17:12 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:58.010 21:17:12 -- target/filesystem.sh@37 -- # kill -0 1092999 00:11:58.010 21:17:12 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:58.010 21:17:12 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:58.010 21:17:12 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:58.010 21:17:12 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:58.010 00:11:58.010 real 0m2.754s 00:11:58.010 user 0m0.017s 00:11:58.010 sys 0m0.085s 00:11:58.010 21:17:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:58.010 21:17:12 -- common/autotest_common.sh@10 -- # set +x 00:11:58.010 ************************************ 00:11:58.010 END TEST filesystem_xfs 00:11:58.010 ************************************ 00:11:58.010 21:17:12 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:58.271 21:17:12 -- target/filesystem.sh@93 -- # sync 00:11:58.271 21:17:12 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:58.271 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:58.271 21:17:13 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:58.271 21:17:13 -- common/autotest_common.sh@1205 -- # local i=0 00:11:58.271 21:17:13 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:11:58.271 21:17:13 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:58.271 21:17:13 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:11:58.271 21:17:13 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:58.271 21:17:13 -- common/autotest_common.sh@1217 -- # return 0 00:11:58.271 21:17:13 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:58.271 21:17:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:58.271 21:17:13 -- common/autotest_common.sh@10 -- # set +x 00:11:58.271 21:17:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:58.271 21:17:13 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:58.271 21:17:13 -- target/filesystem.sh@101 -- # killprocess 1092999 00:11:58.271 21:17:13 -- common/autotest_common.sh@936 -- # '[' -z 1092999 ']' 00:11:58.271 21:17:13 -- common/autotest_common.sh@940 -- # kill -0 1092999 00:11:58.271 21:17:13 -- 
common/autotest_common.sh@941 -- # uname 00:11:58.271 21:17:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:58.271 21:17:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1092999 00:11:58.271 21:17:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:58.271 21:17:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:58.271 21:17:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1092999' 00:11:58.271 killing process with pid 1092999 00:11:58.271 21:17:13 -- common/autotest_common.sh@955 -- # kill 1092999 00:11:58.271 21:17:13 -- common/autotest_common.sh@960 -- # wait 1092999 00:11:59.212 21:17:14 -- target/filesystem.sh@102 -- # nvmfpid= 00:11:59.212 00:11:59.212 real 0m13.678s 00:11:59.212 user 0m52.781s 00:11:59.212 sys 0m1.384s 00:11:59.212 21:17:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:59.212 21:17:14 -- common/autotest_common.sh@10 -- # set +x 00:11:59.212 ************************************ 00:11:59.212 END TEST nvmf_filesystem_no_in_capsule 00:11:59.212 ************************************ 00:11:59.212 21:17:14 -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:11:59.212 21:17:14 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:59.212 21:17:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:59.212 21:17:14 -- common/autotest_common.sh@10 -- # set +x 00:11:59.473 ************************************ 00:11:59.473 START TEST nvmf_filesystem_in_capsule 00:11:59.473 ************************************ 00:11:59.473 21:17:14 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_part 4096 00:11:59.473 21:17:14 -- target/filesystem.sh@47 -- # in_capsule=4096 00:11:59.473 21:17:14 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:59.473 21:17:14 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:11:59.473 21:17:14 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:59.473 21:17:14 -- common/autotest_common.sh@10 -- # set +x 00:11:59.473 21:17:14 -- nvmf/common.sh@470 -- # nvmfpid=1095652 00:11:59.473 21:17:14 -- nvmf/common.sh@471 -- # waitforlisten 1095652 00:11:59.473 21:17:14 -- common/autotest_common.sh@817 -- # '[' -z 1095652 ']' 00:11:59.473 21:17:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:59.473 21:17:14 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:59.473 21:17:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:59.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:59.473 21:17:14 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:59.473 21:17:14 -- common/autotest_common.sh@10 -- # set +x 00:11:59.473 21:17:14 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:59.473 [2024-04-24 21:17:14.330122] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization... 
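The trace above shows nvmfappstart launching nvmf_tgt inside the cvl_0_0_ns_spdk network namespace with -m 0xF (reactors on cores 0-3) and waitforlisten blocking until the target answers on its RPC socket. A hedged sketch of that startup handshake; the polling loop is an assumption, since waitforlisten's internals are not expanded in this trace:

ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# assumed polling loop: block until the RPC socket accepts a harmless call
while ! ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
    sleep 0.1
done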
00:11:59.473 [2024-04-24 21:17:14.330237] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:59.473 EAL: No free 2048 kB hugepages reported on node 1 00:11:59.734 [2024-04-24 21:17:14.460137] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:59.734 [2024-04-24 21:17:14.558822] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:59.734 [2024-04-24 21:17:14.558861] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:59.734 [2024-04-24 21:17:14.558874] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:59.734 [2024-04-24 21:17:14.558886] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:59.734 [2024-04-24 21:17:14.558895] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:59.734 [2024-04-24 21:17:14.558976] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:59.734 [2024-04-24 21:17:14.559108] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:59.734 [2024-04-24 21:17:14.559126] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:59.734 [2024-04-24 21:17:14.559129] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:00.305 21:17:15 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:00.305 21:17:15 -- common/autotest_common.sh@850 -- # return 0 00:12:00.305 21:17:15 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:12:00.305 21:17:15 -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:00.305 21:17:15 -- common/autotest_common.sh@10 -- # set +x 00:12:00.305 21:17:15 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:00.305 21:17:15 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:12:00.305 21:17:15 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:12:00.305 21:17:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:00.305 21:17:15 -- common/autotest_common.sh@10 -- # set +x 00:12:00.305 [2024-04-24 21:17:15.087125] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:00.305 21:17:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:00.305 21:17:15 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:12:00.305 21:17:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:00.305 21:17:15 -- common/autotest_common.sh@10 -- # set +x 00:12:00.566 Malloc1 00:12:00.566 21:17:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:00.566 21:17:15 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:00.566 21:17:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:00.566 21:17:15 -- common/autotest_common.sh@10 -- # set +x 00:12:00.566 21:17:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:00.566 21:17:15 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:00.566 21:17:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:00.566 21:17:15 -- common/autotest_common.sh@10 -- # set +x 00:12:00.566 21:17:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:00.566 21:17:15 
-- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:00.566 21:17:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:00.566 21:17:15 -- common/autotest_common.sh@10 -- # set +x 00:12:00.566 [2024-04-24 21:17:15.349437] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:00.566 21:17:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:00.566 21:17:15 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:12:00.566 21:17:15 -- common/autotest_common.sh@1364 -- # local bdev_name=Malloc1 00:12:00.566 21:17:15 -- common/autotest_common.sh@1365 -- # local bdev_info 00:12:00.566 21:17:15 -- common/autotest_common.sh@1366 -- # local bs 00:12:00.566 21:17:15 -- common/autotest_common.sh@1367 -- # local nb 00:12:00.566 21:17:15 -- common/autotest_common.sh@1368 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:12:00.566 21:17:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:00.566 21:17:15 -- common/autotest_common.sh@10 -- # set +x 00:12:00.566 21:17:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:00.566 21:17:15 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:12:00.566 { 00:12:00.566 "name": "Malloc1", 00:12:00.566 "aliases": [ 00:12:00.566 "18754077-b19f-453c-b30e-c9bfa97e2e91" 00:12:00.566 ], 00:12:00.566 "product_name": "Malloc disk", 00:12:00.566 "block_size": 512, 00:12:00.566 "num_blocks": 1048576, 00:12:00.566 "uuid": "18754077-b19f-453c-b30e-c9bfa97e2e91", 00:12:00.566 "assigned_rate_limits": { 00:12:00.566 "rw_ios_per_sec": 0, 00:12:00.566 "rw_mbytes_per_sec": 0, 00:12:00.566 "r_mbytes_per_sec": 0, 00:12:00.566 "w_mbytes_per_sec": 0 00:12:00.566 }, 00:12:00.566 "claimed": true, 00:12:00.566 "claim_type": "exclusive_write", 00:12:00.566 "zoned": false, 00:12:00.566 "supported_io_types": { 00:12:00.566 "read": true, 00:12:00.566 "write": true, 00:12:00.566 "unmap": true, 00:12:00.566 "write_zeroes": true, 00:12:00.566 "flush": true, 00:12:00.566 "reset": true, 00:12:00.566 "compare": false, 00:12:00.566 "compare_and_write": false, 00:12:00.566 "abort": true, 00:12:00.566 "nvme_admin": false, 00:12:00.566 "nvme_io": false 00:12:00.566 }, 00:12:00.566 "memory_domains": [ 00:12:00.566 { 00:12:00.566 "dma_device_id": "system", 00:12:00.566 "dma_device_type": 1 00:12:00.566 }, 00:12:00.566 { 00:12:00.566 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:00.566 "dma_device_type": 2 00:12:00.566 } 00:12:00.566 ], 00:12:00.566 "driver_specific": {} 00:12:00.566 } 00:12:00.566 ]' 00:12:00.566 21:17:15 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:12:00.566 21:17:15 -- common/autotest_common.sh@1369 -- # bs=512 00:12:00.566 21:17:15 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:12:00.566 21:17:15 -- common/autotest_common.sh@1370 -- # nb=1048576 00:12:00.566 21:17:15 -- common/autotest_common.sh@1373 -- # bdev_size=512 00:12:00.566 21:17:15 -- common/autotest_common.sh@1374 -- # echo 512 00:12:00.566 21:17:15 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:12:00.566 21:17:15 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b7babf-2e5c-ee11-906e-a4bf01970bf2 --hostid=80b7babf-2e5c-ee11-906e-a4bf01970bf2 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:02.481 21:17:16 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:12:02.481 21:17:16 -- common/autotest_common.sh@1184 -- # local i=0 00:12:02.481 21:17:16 -- 
common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:12:02.481 21:17:16 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:12:02.481 21:17:16 -- common/autotest_common.sh@1191 -- # sleep 2 00:12:04.388 21:17:18 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:12:04.388 21:17:18 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:12:04.388 21:17:18 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:12:04.388 21:17:19 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:12:04.388 21:17:19 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:12:04.388 21:17:19 -- common/autotest_common.sh@1194 -- # return 0 00:12:04.388 21:17:19 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:12:04.388 21:17:19 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:12:04.388 21:17:19 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:12:04.388 21:17:19 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:12:04.388 21:17:19 -- setup/common.sh@76 -- # local dev=nvme0n1 00:12:04.388 21:17:19 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:12:04.388 21:17:19 -- setup/common.sh@80 -- # echo 536870912 00:12:04.388 21:17:19 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:12:04.388 21:17:19 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:12:04.388 21:17:19 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:12:04.388 21:17:19 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:12:04.646 21:17:19 -- target/filesystem.sh@69 -- # partprobe 00:12:04.905 21:17:19 -- target/filesystem.sh@70 -- # sleep 1 00:12:06.288 21:17:20 -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:12:06.288 21:17:20 -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:12:06.288 21:17:20 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:12:06.288 21:17:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:06.288 21:17:20 -- common/autotest_common.sh@10 -- # set +x 00:12:06.288 ************************************ 00:12:06.288 START TEST filesystem_in_capsule_ext4 00:12:06.288 ************************************ 00:12:06.288 21:17:20 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create ext4 nvme0n1 00:12:06.288 21:17:20 -- target/filesystem.sh@18 -- # fstype=ext4 00:12:06.288 21:17:20 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:06.288 21:17:20 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:12:06.288 21:17:20 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:12:06.288 21:17:20 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:12:06.288 21:17:20 -- common/autotest_common.sh@914 -- # local i=0 00:12:06.288 21:17:20 -- common/autotest_common.sh@915 -- # local force 00:12:06.288 21:17:20 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:12:06.288 21:17:20 -- common/autotest_common.sh@918 -- # force=-F 00:12:06.288 21:17:20 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:12:06.288 mke2fs 1.46.5 (30-Dec-2021) 00:12:06.288 Discarding device blocks: 0/522240 done 00:12:06.288 Creating filesystem with 522240 1k blocks and 130560 inodes 00:12:06.288 Filesystem UUID: cfc93168-facc-4541-9a87-9cd4689b7d31 00:12:06.288 Superblock backups stored on blocks: 00:12:06.288 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:12:06.288 00:12:06.288 
Allocating group tables: 0/64 done 00:12:06.288 Writing inode tables: 0/64 done 00:12:09.586 Creating journal (8192 blocks): done 00:12:09.848 Writing superblocks and filesystem accounting information: 0/64 1/64 done 00:12:09.848 00:12:09.848 21:17:24 -- common/autotest_common.sh@931 -- # return 0 00:12:09.848 21:17:24 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:10.785 21:17:25 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:10.786 21:17:25 -- target/filesystem.sh@25 -- # sync 00:12:10.786 21:17:25 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:10.786 21:17:25 -- target/filesystem.sh@27 -- # sync 00:12:10.786 21:17:25 -- target/filesystem.sh@29 -- # i=0 00:12:10.786 21:17:25 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:10.786 21:17:25 -- target/filesystem.sh@37 -- # kill -0 1095652 00:12:10.786 21:17:25 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:10.786 21:17:25 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:10.786 21:17:25 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:10.786 21:17:25 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:10.786 00:12:10.786 real 0m4.699s 00:12:10.786 user 0m0.027s 00:12:10.786 sys 0m0.061s 00:12:10.786 21:17:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:10.786 21:17:25 -- common/autotest_common.sh@10 -- # set +x 00:12:10.786 ************************************ 00:12:10.786 END TEST filesystem_in_capsule_ext4 00:12:10.786 ************************************ 00:12:10.786 21:17:25 -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:10.786 21:17:25 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:12:10.786 21:17:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:10.786 21:17:25 -- common/autotest_common.sh@10 -- # set +x 00:12:11.046 ************************************ 00:12:11.046 START TEST filesystem_in_capsule_btrfs 00:12:11.046 ************************************ 00:12:11.046 21:17:25 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:11.046 21:17:25 -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:11.046 21:17:25 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:11.046 21:17:25 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:11.046 21:17:25 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:12:11.046 21:17:25 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:12:11.046 21:17:25 -- common/autotest_common.sh@914 -- # local i=0 00:12:11.046 21:17:25 -- common/autotest_common.sh@915 -- # local force 00:12:11.046 21:17:25 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:12:11.046 21:17:25 -- common/autotest_common.sh@920 -- # force=-f 00:12:11.046 21:17:25 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:11.046 btrfs-progs v6.6.2 00:12:11.046 See https://btrfs.readthedocs.io for more information. 00:12:11.046 00:12:11.046 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:12:11.046 NOTE: several default settings have changed in version 5.15, please make sure 00:12:11.046 this does not affect your deployments: 00:12:11.046 - DUP for metadata (-m dup) 00:12:11.046 - enabled no-holes (-O no-holes) 00:12:11.046 - enabled free-space-tree (-R free-space-tree) 00:12:11.046 00:12:11.046 Label: (null) 00:12:11.046 UUID: 4b1e20b1-e5a2-4b58-9ac5-59f3ae906eea 00:12:11.046 Node size: 16384 00:12:11.046 Sector size: 4096 00:12:11.046 Filesystem size: 510.00MiB 00:12:11.046 Block group profiles: 00:12:11.046 Data: single 8.00MiB 00:12:11.046 Metadata: DUP 32.00MiB 00:12:11.046 System: DUP 8.00MiB 00:12:11.046 SSD detected: yes 00:12:11.046 Zoned device: no 00:12:11.046 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:12:11.046 Runtime features: free-space-tree 00:12:11.046 Checksum: crc32c 00:12:11.046 Number of devices: 1 00:12:11.046 Devices: 00:12:11.046 ID SIZE PATH 00:12:11.046 1 510.00MiB /dev/nvme0n1p1 00:12:11.046 00:12:11.046 21:17:25 -- common/autotest_common.sh@931 -- # return 0 00:12:11.046 21:17:25 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:11.987 21:17:26 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:11.987 21:17:26 -- target/filesystem.sh@25 -- # sync 00:12:11.987 21:17:26 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:11.987 21:17:26 -- target/filesystem.sh@27 -- # sync 00:12:11.987 21:17:26 -- target/filesystem.sh@29 -- # i=0 00:12:11.987 21:17:26 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:11.987 21:17:26 -- target/filesystem.sh@37 -- # kill -0 1095652 00:12:11.987 21:17:26 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:11.987 21:17:26 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:11.987 21:17:26 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:11.987 21:17:26 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:11.987 00:12:11.987 real 0m1.066s 00:12:11.987 user 0m0.016s 00:12:11.987 sys 0m0.143s 00:12:11.987 21:17:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:11.987 21:17:26 -- common/autotest_common.sh@10 -- # set +x 00:12:11.987 ************************************ 00:12:11.987 END TEST filesystem_in_capsule_btrfs 00:12:11.987 ************************************ 00:12:11.987 21:17:26 -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:12:11.987 21:17:26 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:12:11.987 21:17:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:11.987 21:17:26 -- common/autotest_common.sh@10 -- # set +x 00:12:12.246 ************************************ 00:12:12.246 START TEST filesystem_in_capsule_xfs 00:12:12.246 ************************************ 00:12:12.246 21:17:27 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create xfs nvme0n1 00:12:12.246 21:17:27 -- target/filesystem.sh@18 -- # fstype=xfs 00:12:12.246 21:17:27 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:12.246 21:17:27 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:12.246 21:17:27 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:12:12.246 21:17:27 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:12:12.246 21:17:27 -- common/autotest_common.sh@914 -- # local i=0 00:12:12.246 21:17:27 -- common/autotest_common.sh@915 -- # local force 00:12:12.246 21:17:27 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:12:12.246 21:17:27 -- common/autotest_common.sh@920 -- # force=-f 
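Both the no-capsule and in-capsule variants funnel through the same nvmf_filesystem_create body: the target/filesystem.sh@21-43 trace lines above and below map onto a sequence like the following sketch (helper and variable names as traced; exact error handling is assumed):

nvmf_filesystem_create() {
    local fstype=$1 nvme_name=$2
    make_filesystem "$fstype" "/dev/${nvme_name}p1"
    mount "/dev/${nvme_name}p1" /mnt/device
    touch /mnt/device/aaa          # simple write through the new filesystem
    sync
    rm /mnt/device/aaa
    sync
    umount /mnt/device
    kill -0 "$nvmfpid"             # target process must still be alive
    lsblk -l -o NAME | grep -q -w "$nvme_name"       # device still visible
    lsblk -l -o NAME | grep -q -w "${nvme_name}p1"   # partition still visible
}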
00:12:12.246 21:17:27 -- common/autotest_common.sh@923 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:12.246 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:12.246 = sectsz=512 attr=2, projid32bit=1 00:12:12.246 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:12.246 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:12.246 data = bsize=4096 blocks=130560, imaxpct=25 00:12:12.246 = sunit=0 swidth=0 blks 00:12:12.246 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:12.246 log =internal log bsize=4096 blocks=16384, version=2 00:12:12.246 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:12.246 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:13.186 Discarding blocks...Done. 00:12:13.186 21:17:27 -- common/autotest_common.sh@931 -- # return 0 00:12:13.186 21:17:27 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:15.095 21:17:29 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:15.095 21:17:29 -- target/filesystem.sh@25 -- # sync 00:12:15.095 21:17:29 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:15.095 21:17:29 -- target/filesystem.sh@27 -- # sync 00:12:15.095 21:17:29 -- target/filesystem.sh@29 -- # i=0 00:12:15.095 21:17:29 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:15.095 21:17:29 -- target/filesystem.sh@37 -- # kill -0 1095652 00:12:15.095 21:17:29 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:15.095 21:17:29 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:15.095 21:17:29 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:15.095 21:17:29 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:15.095 00:12:15.095 real 0m2.653s 00:12:15.095 user 0m0.017s 00:12:15.095 sys 0m0.089s 00:12:15.095 21:17:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:15.095 21:17:29 -- common/autotest_common.sh@10 -- # set +x 00:12:15.095 ************************************ 00:12:15.095 END TEST filesystem_in_capsule_xfs 00:12:15.096 ************************************ 00:12:15.096 21:17:29 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:15.096 21:17:29 -- target/filesystem.sh@93 -- # sync 00:12:15.096 21:17:29 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:15.096 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:15.096 21:17:29 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:15.096 21:17:29 -- common/autotest_common.sh@1205 -- # local i=0 00:12:15.096 21:17:29 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:12:15.096 21:17:29 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:15.096 21:17:29 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:12:15.096 21:17:29 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:15.096 21:17:29 -- common/autotest_common.sh@1217 -- # return 0 00:12:15.096 21:17:29 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:15.096 21:17:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:15.096 21:17:29 -- common/autotest_common.sh@10 -- # set +x 00:12:15.096 21:17:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:15.096 21:17:30 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:15.096 21:17:30 -- target/filesystem.sh@101 -- # killprocess 1095652 00:12:15.096 21:17:30 -- common/autotest_common.sh@936 -- # '[' -z 1095652 ']' 00:12:15.096 21:17:30 -- common/autotest_common.sh@940 -- # kill -0 1095652 
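After the three filesystem subtests, the trace shows a fixed teardown: remove the GPT partition under flock, disconnect the initiator, delete the subsystem over RPC, then kill the target and wait for the PID to reap. A condensed sketch of filesystem.sh@91-101 together with the killprocess helper traced below (rpc_cmd is the suite's wrapper around rpc.py):

flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1     # drop partition 1 atomically
sync
nvme disconnect -n nqn.2016-06.io.spdk:cnode1      # initiator-side detach
rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # target-side cleanup
kill "$nvmfpid" && wait "$nvmfpid"                 # killprocess: SIGTERM, then reap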
00:12:15.096 21:17:30 -- common/autotest_common.sh@941 -- # uname 00:12:15.096 21:17:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:15.096 21:17:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1095652 00:12:15.096 21:17:30 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:15.096 21:17:30 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:15.096 21:17:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1095652' 00:12:15.096 killing process with pid 1095652 00:12:15.096 21:17:30 -- common/autotest_common.sh@955 -- # kill 1095652 00:12:15.096 21:17:30 -- common/autotest_common.sh@960 -- # wait 1095652 00:12:16.036 21:17:30 -- target/filesystem.sh@102 -- # nvmfpid= 00:12:16.036 00:12:16.036 real 0m16.754s 00:12:16.036 user 1m5.092s 00:12:16.036 sys 0m1.404s 00:12:16.036 21:17:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:16.036 21:17:30 -- common/autotest_common.sh@10 -- # set +x 00:12:16.036 ************************************ 00:12:16.036 END TEST nvmf_filesystem_in_capsule 00:12:16.036 ************************************ 00:12:16.296 21:17:31 -- target/filesystem.sh@108 -- # nvmftestfini 00:12:16.296 21:17:31 -- nvmf/common.sh@477 -- # nvmfcleanup 00:12:16.296 21:17:31 -- nvmf/common.sh@117 -- # sync 00:12:16.296 21:17:31 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:16.296 21:17:31 -- nvmf/common.sh@120 -- # set +e 00:12:16.296 21:17:31 -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:16.296 21:17:31 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:16.296 rmmod nvme_tcp 00:12:16.296 rmmod nvme_fabrics 00:12:16.296 rmmod nvme_keyring 00:12:16.296 21:17:31 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:16.296 21:17:31 -- nvmf/common.sh@124 -- # set -e 00:12:16.296 21:17:31 -- nvmf/common.sh@125 -- # return 0 00:12:16.296 21:17:31 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:12:16.296 21:17:31 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:12:16.296 21:17:31 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:12:16.296 21:17:31 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:12:16.296 21:17:31 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:16.296 21:17:31 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:16.296 21:17:31 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:16.296 21:17:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:16.296 21:17:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:18.211 21:17:33 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:18.211 00:12:18.211 real 0m39.075s 00:12:18.211 user 1m59.784s 00:12:18.211 sys 0m7.452s 00:12:18.211 21:17:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:18.211 21:17:33 -- common/autotest_common.sh@10 -- # set +x 00:12:18.211 ************************************ 00:12:18.211 END TEST nvmf_filesystem 00:12:18.211 ************************************ 00:12:18.211 21:17:33 -- nvmf/nvmf.sh@25 -- # run_test nvmf_discovery /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:18.211 21:17:33 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:18.211 21:17:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:18.211 21:17:33 -- common/autotest_common.sh@10 -- # set +x 00:12:18.472 ************************************ 00:12:18.472 START TEST nvmf_discovery 00:12:18.472 ************************************ 00:12:18.472 21:17:33 -- 
common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:18.472 * Looking for test storage... 00:12:18.472 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:12:18.472 21:17:33 -- target/discovery.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:12:18.472 21:17:33 -- nvmf/common.sh@7 -- # uname -s 00:12:18.472 21:17:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:18.472 21:17:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:18.472 21:17:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:18.472 21:17:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:18.472 21:17:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:18.472 21:17:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:18.472 21:17:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:18.472 21:17:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:18.472 21:17:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:18.472 21:17:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:18.472 21:17:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b7babf-2e5c-ee11-906e-a4bf01970bf2 00:12:18.472 21:17:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=80b7babf-2e5c-ee11-906e-a4bf01970bf2 00:12:18.472 21:17:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:18.472 21:17:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:18.472 21:17:33 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:12:18.472 21:17:33 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:18.472 21:17:33 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:12:18.472 21:17:33 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:18.472 21:17:33 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:18.472 21:17:33 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:18.472 21:17:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:18.472 21:17:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:18.472 21:17:33 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:18.472 21:17:33 -- paths/export.sh@5 -- # export PATH 00:12:18.473 21:17:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:18.473 21:17:33 -- nvmf/common.sh@47 -- # : 0 00:12:18.473 21:17:33 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:18.473 21:17:33 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:18.473 21:17:33 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:18.473 21:17:33 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:18.473 21:17:33 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:18.473 21:17:33 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:18.473 21:17:33 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:18.473 21:17:33 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:18.473 21:17:33 -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:12:18.473 21:17:33 -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:12:18.473 21:17:33 -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:12:18.473 21:17:33 -- target/discovery.sh@15 -- # hash nvme 00:12:18.473 21:17:33 -- target/discovery.sh@20 -- # nvmftestinit 00:12:18.473 21:17:33 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:12:18.473 21:17:33 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:18.473 21:17:33 -- nvmf/common.sh@437 -- # prepare_net_devs 00:12:18.473 21:17:33 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:12:18.473 21:17:33 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:12:18.473 21:17:33 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:18.473 21:17:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:18.473 21:17:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:18.473 21:17:33 -- nvmf/common.sh@403 -- # [[ phy-fallback != virt ]] 00:12:18.473 21:17:33 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:12:18.473 21:17:33 -- nvmf/common.sh@285 -- # xtrace_disable 00:12:18.473 21:17:33 -- common/autotest_common.sh@10 -- # set +x 00:12:23.744 21:17:38 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:12:23.744 21:17:38 -- nvmf/common.sh@291 -- # pci_devs=() 00:12:23.744 21:17:38 -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:23.744 21:17:38 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:23.744 21:17:38 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:23.744 21:17:38 -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:23.744 21:17:38 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:23.744 
21:17:38 -- nvmf/common.sh@295 -- # net_devs=() 00:12:23.744 21:17:38 -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:23.744 21:17:38 -- nvmf/common.sh@296 -- # e810=() 00:12:23.744 21:17:38 -- nvmf/common.sh@296 -- # local -ga e810 00:12:23.744 21:17:38 -- nvmf/common.sh@297 -- # x722=() 00:12:23.744 21:17:38 -- nvmf/common.sh@297 -- # local -ga x722 00:12:23.744 21:17:38 -- nvmf/common.sh@298 -- # mlx=() 00:12:23.744 21:17:38 -- nvmf/common.sh@298 -- # local -ga mlx 00:12:23.744 21:17:38 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:23.744 21:17:38 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:23.744 21:17:38 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:23.744 21:17:38 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:23.744 21:17:38 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:23.744 21:17:38 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:23.744 21:17:38 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:23.744 21:17:38 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:23.744 21:17:38 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:23.744 21:17:38 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:23.744 21:17:38 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:23.744 21:17:38 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:23.744 21:17:38 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:23.744 21:17:38 -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:12:23.744 21:17:38 -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:12:23.744 21:17:38 -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:12:23.744 21:17:38 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:23.744 21:17:38 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:23.744 21:17:38 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:12:23.744 Found 0000:27:00.0 (0x8086 - 0x159b) 00:12:23.744 21:17:38 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:23.744 21:17:38 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:23.744 21:17:38 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:23.744 21:17:38 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:23.744 21:17:38 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:23.744 21:17:38 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:23.744 21:17:38 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:12:23.744 Found 0000:27:00.1 (0x8086 - 0x159b) 00:12:23.744 21:17:38 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:23.744 21:17:38 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:23.745 21:17:38 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:23.745 21:17:38 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:23.745 21:17:38 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:23.745 21:17:38 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:23.745 21:17:38 -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:12:23.745 21:17:38 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:23.745 21:17:38 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:23.745 21:17:38 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:23.745 21:17:38 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:23.745 21:17:38 -- 
nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:12:23.745 Found net devices under 0000:27:00.0: cvl_0_0 00:12:23.745 21:17:38 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:23.745 21:17:38 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:23.745 21:17:38 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:23.745 21:17:38 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:23.745 21:17:38 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:23.745 21:17:38 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:12:23.745 Found net devices under 0000:27:00.1: cvl_0_1 00:12:23.745 21:17:38 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:23.745 21:17:38 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:12:23.745 21:17:38 -- nvmf/common.sh@403 -- # is_hw=yes 00:12:23.745 21:17:38 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:12:23.745 21:17:38 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:12:23.745 21:17:38 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:12:23.745 21:17:38 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:23.745 21:17:38 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:23.745 21:17:38 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:23.745 21:17:38 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:23.745 21:17:38 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:23.745 21:17:38 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:23.745 21:17:38 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:23.745 21:17:38 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:23.745 21:17:38 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:23.745 21:17:38 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:23.745 21:17:38 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:23.745 21:17:38 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:23.745 21:17:38 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:23.745 21:17:38 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:23.745 21:17:38 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:23.745 21:17:38 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:23.745 21:17:38 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:24.003 21:17:38 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:24.003 21:17:38 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:24.003 21:17:38 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:24.003 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:24.003 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.518 ms 00:12:24.003 00:12:24.003 --- 10.0.0.2 ping statistics --- 00:12:24.003 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:24.003 rtt min/avg/max/mdev = 0.518/0.518/0.518/0.000 ms 00:12:24.003 21:17:38 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:24.003 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:24.003 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.329 ms 00:12:24.003 00:12:24.003 --- 10.0.0.1 ping statistics --- 00:12:24.003 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:24.003 rtt min/avg/max/mdev = 0.329/0.329/0.329/0.000 ms 00:12:24.003 21:17:38 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:24.003 21:17:38 -- nvmf/common.sh@411 -- # return 0 00:12:24.003 21:17:38 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:12:24.003 21:17:38 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:24.003 21:17:38 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:12:24.003 21:17:38 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:12:24.003 21:17:38 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:24.003 21:17:38 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:12:24.003 21:17:38 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:12:24.003 21:17:38 -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:12:24.003 21:17:38 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:12:24.003 21:17:38 -- common/autotest_common.sh@710 -- # xtrace_disable 00:12:24.003 21:17:38 -- common/autotest_common.sh@10 -- # set +x 00:12:24.003 21:17:38 -- nvmf/common.sh@470 -- # nvmfpid=1103053 00:12:24.003 21:17:38 -- nvmf/common.sh@471 -- # waitforlisten 1103053 00:12:24.003 21:17:38 -- common/autotest_common.sh@817 -- # '[' -z 1103053 ']' 00:12:24.003 21:17:38 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:24.003 21:17:38 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:24.003 21:17:38 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:24.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:24.003 21:17:38 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:24.003 21:17:38 -- common/autotest_common.sh@10 -- # set +x 00:12:24.003 21:17:38 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:24.003 [2024-04-24 21:17:38.879051] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization... 00:12:24.003 [2024-04-24 21:17:38.879154] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:24.003 EAL: No free 2048 kB hugepages reported on node 1 00:12:24.263 [2024-04-24 21:17:38.998234] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:24.263 [2024-04-24 21:17:39.096627] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:24.263 [2024-04-24 21:17:39.096662] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:24.263 [2024-04-24 21:17:39.096673] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:24.263 [2024-04-24 21:17:39.096682] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:24.263 [2024-04-24 21:17:39.096689] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
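Once the target is up, the discovery test provisions four identical subsystems; the rpc_cmd calls traced below (target/discovery.sh@26-35) reduce to the loop sketched here, which also registers the discovery-service listener and the port-4430 referral that together produce the six discovery log records shown further down:

rpc_cmd nvmf_create_transport -t tcp -o -u 8192
for i in $(seq 1 4); do
    rpc_cmd bdev_null_create "Null$i" 102400 512                  # 100 MiB, 512 B blocks
    rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
        -a -s "SPDK0000000000000$i"
    rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
    rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
        -t tcp -a 10.0.0.2 -s 4420
done
rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430    # discovery log entry 5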
00:12:24.263 [2024-04-24 21:17:39.096843] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:24.263 [2024-04-24 21:17:39.096942] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:24.263 [2024-04-24 21:17:39.097041] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:24.263 [2024-04-24 21:17:39.097052] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:24.845 21:17:39 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:24.845 21:17:39 -- common/autotest_common.sh@850 -- # return 0 00:12:24.845 21:17:39 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:12:24.845 21:17:39 -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:24.845 21:17:39 -- common/autotest_common.sh@10 -- # set +x 00:12:24.846 21:17:39 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:24.846 21:17:39 -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:24.846 21:17:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:24.846 21:17:39 -- common/autotest_common.sh@10 -- # set +x 00:12:24.846 [2024-04-24 21:17:39.632731] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:24.846 21:17:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:24.846 21:17:39 -- target/discovery.sh@26 -- # seq 1 4 00:12:24.846 21:17:39 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:24.846 21:17:39 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:12:24.846 21:17:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:24.846 21:17:39 -- common/autotest_common.sh@10 -- # set +x 00:12:24.846 Null1 00:12:24.846 21:17:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:24.846 21:17:39 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:24.846 21:17:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:24.846 21:17:39 -- common/autotest_common.sh@10 -- # set +x 00:12:24.846 21:17:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:24.846 21:17:39 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:12:24.846 21:17:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:24.846 21:17:39 -- common/autotest_common.sh@10 -- # set +x 00:12:24.846 21:17:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:24.846 21:17:39 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:24.846 21:17:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:24.846 21:17:39 -- common/autotest_common.sh@10 -- # set +x 00:12:24.846 [2024-04-24 21:17:39.684940] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:24.846 21:17:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:24.846 21:17:39 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:24.846 21:17:39 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:12:24.846 21:17:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:24.846 21:17:39 -- common/autotest_common.sh@10 -- # set +x 00:12:24.846 Null2 00:12:24.846 21:17:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:24.846 21:17:39 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:12:24.846 21:17:39 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:12:24.846 21:17:39 -- common/autotest_common.sh@10 -- # set +x 00:12:24.846 21:17:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:24.846 21:17:39 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:12:24.846 21:17:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:24.846 21:17:39 -- common/autotest_common.sh@10 -- # set +x 00:12:24.846 21:17:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:24.846 21:17:39 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:12:24.846 21:17:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:24.846 21:17:39 -- common/autotest_common.sh@10 -- # set +x 00:12:24.846 21:17:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:24.846 21:17:39 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:24.846 21:17:39 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:12:24.846 21:17:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:24.846 21:17:39 -- common/autotest_common.sh@10 -- # set +x 00:12:24.846 Null3 00:12:24.846 21:17:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:24.846 21:17:39 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:12:24.846 21:17:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:24.846 21:17:39 -- common/autotest_common.sh@10 -- # set +x 00:12:24.846 21:17:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:24.846 21:17:39 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:12:24.846 21:17:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:24.846 21:17:39 -- common/autotest_common.sh@10 -- # set +x 00:12:24.846 21:17:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:24.846 21:17:39 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:12:24.846 21:17:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:24.846 21:17:39 -- common/autotest_common.sh@10 -- # set +x 00:12:24.846 21:17:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:24.846 21:17:39 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:24.846 21:17:39 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:12:24.846 21:17:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:24.846 21:17:39 -- common/autotest_common.sh@10 -- # set +x 00:12:24.846 Null4 00:12:24.846 21:17:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:24.846 21:17:39 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:12:24.846 21:17:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:24.846 21:17:39 -- common/autotest_common.sh@10 -- # set +x 00:12:24.846 21:17:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:24.846 21:17:39 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:12:24.846 21:17:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:24.846 21:17:39 -- common/autotest_common.sh@10 -- # set +x 00:12:24.846 21:17:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:24.846 21:17:39 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:12:24.846 
21:17:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:24.846 21:17:39 -- common/autotest_common.sh@10 -- # set +x 00:12:24.846 21:17:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:24.846 21:17:39 -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:24.846 21:17:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:24.846 21:17:39 -- common/autotest_common.sh@10 -- # set +x 00:12:24.846 21:17:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:24.846 21:17:39 -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:12:24.846 21:17:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:24.846 21:17:39 -- common/autotest_common.sh@10 -- # set +x 00:12:24.846 21:17:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:24.846 21:17:39 -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b7babf-2e5c-ee11-906e-a4bf01970bf2 --hostid=80b7babf-2e5c-ee11-906e-a4bf01970bf2 -t tcp -a 10.0.0.2 -s 4420 00:12:25.110 00:12:25.110 Discovery Log Number of Records 6, Generation counter 6 00:12:25.110 =====Discovery Log Entry 0====== 00:12:25.110 trtype: tcp 00:12:25.110 adrfam: ipv4 00:12:25.110 subtype: current discovery subsystem 00:12:25.110 treq: not required 00:12:25.110 portid: 0 00:12:25.110 trsvcid: 4420 00:12:25.110 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:25.110 traddr: 10.0.0.2 00:12:25.110 eflags: explicit discovery connections, duplicate discovery information 00:12:25.110 sectype: none 00:12:25.110 =====Discovery Log Entry 1====== 00:12:25.110 trtype: tcp 00:12:25.110 adrfam: ipv4 00:12:25.110 subtype: nvme subsystem 00:12:25.110 treq: not required 00:12:25.110 portid: 0 00:12:25.110 trsvcid: 4420 00:12:25.110 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:25.110 traddr: 10.0.0.2 00:12:25.110 eflags: none 00:12:25.110 sectype: none 00:12:25.110 =====Discovery Log Entry 2====== 00:12:25.110 trtype: tcp 00:12:25.110 adrfam: ipv4 00:12:25.110 subtype: nvme subsystem 00:12:25.110 treq: not required 00:12:25.110 portid: 0 00:12:25.110 trsvcid: 4420 00:12:25.110 subnqn: nqn.2016-06.io.spdk:cnode2 00:12:25.110 traddr: 10.0.0.2 00:12:25.110 eflags: none 00:12:25.110 sectype: none 00:12:25.110 =====Discovery Log Entry 3====== 00:12:25.110 trtype: tcp 00:12:25.110 adrfam: ipv4 00:12:25.110 subtype: nvme subsystem 00:12:25.110 treq: not required 00:12:25.110 portid: 0 00:12:25.110 trsvcid: 4420 00:12:25.110 subnqn: nqn.2016-06.io.spdk:cnode3 00:12:25.110 traddr: 10.0.0.2 00:12:25.110 eflags: none 00:12:25.110 sectype: none 00:12:25.110 =====Discovery Log Entry 4====== 00:12:25.110 trtype: tcp 00:12:25.110 adrfam: ipv4 00:12:25.110 subtype: nvme subsystem 00:12:25.110 treq: not required 00:12:25.110 portid: 0 00:12:25.110 trsvcid: 4420 00:12:25.110 subnqn: nqn.2016-06.io.spdk:cnode4 00:12:25.110 traddr: 10.0.0.2 00:12:25.110 eflags: none 00:12:25.111 sectype: none 00:12:25.111 =====Discovery Log Entry 5====== 00:12:25.111 trtype: tcp 00:12:25.111 adrfam: ipv4 00:12:25.111 subtype: discovery subsystem referral 00:12:25.111 treq: not required 00:12:25.111 portid: 0 00:12:25.111 trsvcid: 4430 00:12:25.111 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:25.111 traddr: 10.0.0.2 00:12:25.111 eflags: none 00:12:25.111 sectype: none 00:12:25.111 21:17:40 -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:12:25.111 Perform nvmf subsystem discovery via RPC 00:12:25.111 21:17:40 -- 
target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:12:25.111 21:17:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:25.111 21:17:40 -- common/autotest_common.sh@10 -- # set +x 00:12:25.111 [2024-04-24 21:17:40.009172] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:12:25.111 [ 00:12:25.111 { 00:12:25.111 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:25.111 "subtype": "Discovery", 00:12:25.111 "listen_addresses": [ 00:12:25.111 { 00:12:25.111 "transport": "TCP", 00:12:25.111 "trtype": "TCP", 00:12:25.111 "adrfam": "IPv4", 00:12:25.111 "traddr": "10.0.0.2", 00:12:25.111 "trsvcid": "4420" 00:12:25.111 } 00:12:25.111 ], 00:12:25.111 "allow_any_host": true, 00:12:25.111 "hosts": [] 00:12:25.111 }, 00:12:25.111 { 00:12:25.111 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:25.111 "subtype": "NVMe", 00:12:25.111 "listen_addresses": [ 00:12:25.111 { 00:12:25.111 "transport": "TCP", 00:12:25.111 "trtype": "TCP", 00:12:25.111 "adrfam": "IPv4", 00:12:25.111 "traddr": "10.0.0.2", 00:12:25.111 "trsvcid": "4420" 00:12:25.111 } 00:12:25.111 ], 00:12:25.111 "allow_any_host": true, 00:12:25.111 "hosts": [], 00:12:25.111 "serial_number": "SPDK00000000000001", 00:12:25.111 "model_number": "SPDK bdev Controller", 00:12:25.111 "max_namespaces": 32, 00:12:25.111 "min_cntlid": 1, 00:12:25.111 "max_cntlid": 65519, 00:12:25.111 "namespaces": [ 00:12:25.111 { 00:12:25.111 "nsid": 1, 00:12:25.111 "bdev_name": "Null1", 00:12:25.111 "name": "Null1", 00:12:25.111 "nguid": "A7063ABC40DA44AAB4F8D8DAA08043E9", 00:12:25.111 "uuid": "a7063abc-40da-44aa-b4f8-d8daa08043e9" 00:12:25.111 } 00:12:25.111 ] 00:12:25.111 }, 00:12:25.111 { 00:12:25.111 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:25.111 "subtype": "NVMe", 00:12:25.111 "listen_addresses": [ 00:12:25.111 { 00:12:25.111 "transport": "TCP", 00:12:25.111 "trtype": "TCP", 00:12:25.111 "adrfam": "IPv4", 00:12:25.111 "traddr": "10.0.0.2", 00:12:25.111 "trsvcid": "4420" 00:12:25.111 } 00:12:25.111 ], 00:12:25.111 "allow_any_host": true, 00:12:25.111 "hosts": [], 00:12:25.111 "serial_number": "SPDK00000000000002", 00:12:25.111 "model_number": "SPDK bdev Controller", 00:12:25.111 "max_namespaces": 32, 00:12:25.111 "min_cntlid": 1, 00:12:25.111 "max_cntlid": 65519, 00:12:25.111 "namespaces": [ 00:12:25.111 { 00:12:25.111 "nsid": 1, 00:12:25.111 "bdev_name": "Null2", 00:12:25.111 "name": "Null2", 00:12:25.111 "nguid": "C190BC7EEDE74F16A6373BC406BD7DC6", 00:12:25.111 "uuid": "c190bc7e-ede7-4f16-a637-3bc406bd7dc6" 00:12:25.111 } 00:12:25.111 ] 00:12:25.111 }, 00:12:25.111 { 00:12:25.111 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:12:25.111 "subtype": "NVMe", 00:12:25.111 "listen_addresses": [ 00:12:25.111 { 00:12:25.111 "transport": "TCP", 00:12:25.111 "trtype": "TCP", 00:12:25.111 "adrfam": "IPv4", 00:12:25.111 "traddr": "10.0.0.2", 00:12:25.111 "trsvcid": "4420" 00:12:25.111 } 00:12:25.111 ], 00:12:25.111 "allow_any_host": true, 00:12:25.111 "hosts": [], 00:12:25.111 "serial_number": "SPDK00000000000003", 00:12:25.111 "model_number": "SPDK bdev Controller", 00:12:25.111 "max_namespaces": 32, 00:12:25.111 "min_cntlid": 1, 00:12:25.111 "max_cntlid": 65519, 00:12:25.111 "namespaces": [ 00:12:25.111 { 00:12:25.111 "nsid": 1, 00:12:25.111 "bdev_name": "Null3", 00:12:25.111 "name": "Null3", 00:12:25.111 "nguid": "6DF315AFA62A4874A7AB050F98AFB753", 00:12:25.111 "uuid": "6df315af-a62a-4874-a7ab-050f98afb753" 00:12:25.111 } 00:12:25.111 ] 
00:12:25.111 }, 00:12:25.111 { 00:12:25.111 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:12:25.111 "subtype": "NVMe", 00:12:25.111 "listen_addresses": [ 00:12:25.111 { 00:12:25.111 "transport": "TCP", 00:12:25.111 "trtype": "TCP", 00:12:25.111 "adrfam": "IPv4", 00:12:25.111 "traddr": "10.0.0.2", 00:12:25.111 "trsvcid": "4420" 00:12:25.111 } 00:12:25.111 ], 00:12:25.111 "allow_any_host": true, 00:12:25.111 "hosts": [], 00:12:25.111 "serial_number": "SPDK00000000000004", 00:12:25.111 "model_number": "SPDK bdev Controller", 00:12:25.111 "max_namespaces": 32, 00:12:25.111 "min_cntlid": 1, 00:12:25.111 "max_cntlid": 65519, 00:12:25.111 "namespaces": [ 00:12:25.111 { 00:12:25.111 "nsid": 1, 00:12:25.111 "bdev_name": "Null4", 00:12:25.111 "name": "Null4", 00:12:25.111 "nguid": "F24850B8D9CB43ABBC37ABF0F72FD185", 00:12:25.111 "uuid": "f24850b8-d9cb-43ab-bc37-abf0f72fd185" 00:12:25.111 } 00:12:25.111 ] 00:12:25.111 } 00:12:25.111 ] 00:12:25.111 21:17:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:25.111 21:17:40 -- target/discovery.sh@42 -- # seq 1 4 00:12:25.111 21:17:40 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:25.111 21:17:40 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:25.111 21:17:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:25.111 21:17:40 -- common/autotest_common.sh@10 -- # set +x 00:12:25.111 21:17:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:25.111 21:17:40 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:12:25.111 21:17:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:25.111 21:17:40 -- common/autotest_common.sh@10 -- # set +x 00:12:25.111 21:17:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:25.111 21:17:40 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:25.111 21:17:40 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:12:25.111 21:17:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:25.111 21:17:40 -- common/autotest_common.sh@10 -- # set +x 00:12:25.111 21:17:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:25.111 21:17:40 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:12:25.111 21:17:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:25.111 21:17:40 -- common/autotest_common.sh@10 -- # set +x 00:12:25.111 21:17:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:25.111 21:17:40 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:25.111 21:17:40 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:12:25.111 21:17:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:25.111 21:17:40 -- common/autotest_common.sh@10 -- # set +x 00:12:25.111 21:17:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:25.111 21:17:40 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:12:25.111 21:17:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:25.111 21:17:40 -- common/autotest_common.sh@10 -- # set +x 00:12:25.369 21:17:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:25.369 21:17:40 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:25.369 21:17:40 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:12:25.369 21:17:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:25.369 21:17:40 -- common/autotest_common.sh@10 -- # set +x 00:12:25.369 21:17:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
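The JSON above is the raw output of the nvmf_get_subsystems RPC that discovery.sh@40 asserts against. An equivalent manual query, assuming scripts/rpc.py from this workspace and the default RPC socket, is a one-liner:

# Print the NQN of every subsystem the running target knows about
# (sketch; rpc.py path assumed from the workspace layout in this log).
/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py \
  -s /var/tmp/spdk.sock nvmf_get_subsystems | jq -r '.[].nqn'
# Against the state dumped above this yields the discovery NQN plus
# nqn.2016-06.io.spdk:cnode1 through cnode4, each listening on 10.0.0.2:4420.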
00:12:25.369 21:17:40 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:12:25.369 21:17:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:25.369 21:17:40 -- common/autotest_common.sh@10 -- # set +x 00:12:25.369 21:17:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:25.369 21:17:40 -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:12:25.369 21:17:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:25.369 21:17:40 -- common/autotest_common.sh@10 -- # set +x 00:12:25.369 21:17:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:25.369 21:17:40 -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:12:25.369 21:17:40 -- target/discovery.sh@49 -- # jq -r '.[].name' 00:12:25.369 21:17:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:25.369 21:17:40 -- common/autotest_common.sh@10 -- # set +x 00:12:25.369 21:17:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:25.369 21:17:40 -- target/discovery.sh@49 -- # check_bdevs= 00:12:25.369 21:17:40 -- target/discovery.sh@50 -- # '[' -n '' ']' 00:12:25.369 21:17:40 -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:12:25.369 21:17:40 -- target/discovery.sh@57 -- # nvmftestfini 00:12:25.369 21:17:40 -- nvmf/common.sh@477 -- # nvmfcleanup 00:12:25.369 21:17:40 -- nvmf/common.sh@117 -- # sync 00:12:25.369 21:17:40 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:25.369 21:17:40 -- nvmf/common.sh@120 -- # set +e 00:12:25.369 21:17:40 -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:25.369 21:17:40 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:25.369 rmmod nvme_tcp 00:12:25.369 rmmod nvme_fabrics 00:12:25.369 rmmod nvme_keyring 00:12:25.369 21:17:40 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:25.369 21:17:40 -- nvmf/common.sh@124 -- # set -e 00:12:25.369 21:17:40 -- nvmf/common.sh@125 -- # return 0 00:12:25.369 21:17:40 -- nvmf/common.sh@478 -- # '[' -n 1103053 ']' 00:12:25.369 21:17:40 -- nvmf/common.sh@479 -- # killprocess 1103053 00:12:25.369 21:17:40 -- common/autotest_common.sh@936 -- # '[' -z 1103053 ']' 00:12:25.369 21:17:40 -- common/autotest_common.sh@940 -- # kill -0 1103053 00:12:25.369 21:17:40 -- common/autotest_common.sh@941 -- # uname 00:12:25.369 21:17:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:25.369 21:17:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1103053 00:12:25.369 21:17:40 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:25.369 21:17:40 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:25.369 21:17:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1103053' 00:12:25.369 killing process with pid 1103053 00:12:25.369 21:17:40 -- common/autotest_common.sh@955 -- # kill 1103053 00:12:25.369 [2024-04-24 21:17:40.259501] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:12:25.369 21:17:40 -- common/autotest_common.sh@960 -- # wait 1103053 00:12:25.936 21:17:40 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:12:25.936 21:17:40 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:12:25.936 21:17:40 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:12:25.936 21:17:40 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:25.936 21:17:40 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:25.936 21:17:40 -- nvmf/common.sh@617 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:12:25.936 21:17:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:25.936 21:17:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:27.844 21:17:42 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:27.844 00:12:27.844 real 0m9.508s 00:12:27.844 user 0m7.659s 00:12:27.844 sys 0m4.440s 00:12:27.844 21:17:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:27.844 21:17:42 -- common/autotest_common.sh@10 -- # set +x 00:12:27.844 ************************************ 00:12:27.844 END TEST nvmf_discovery 00:12:27.844 ************************************ 00:12:28.103 21:17:42 -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:28.103 21:17:42 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:28.103 21:17:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:28.103 21:17:42 -- common/autotest_common.sh@10 -- # set +x 00:12:28.103 ************************************ 00:12:28.103 START TEST nvmf_referrals 00:12:28.103 ************************************ 00:12:28.103 21:17:42 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:28.103 * Looking for test storage... 00:12:28.103 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:12:28.103 21:17:43 -- target/referrals.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:12:28.103 21:17:43 -- nvmf/common.sh@7 -- # uname -s 00:12:28.103 21:17:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:28.103 21:17:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:28.103 21:17:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:28.103 21:17:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:28.103 21:17:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:28.103 21:17:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:28.103 21:17:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:28.103 21:17:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:28.103 21:17:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:28.103 21:17:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:28.103 21:17:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b7babf-2e5c-ee11-906e-a4bf01970bf2 00:12:28.103 21:17:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=80b7babf-2e5c-ee11-906e-a4bf01970bf2 00:12:28.103 21:17:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:28.103 21:17:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:28.103 21:17:43 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:12:28.103 21:17:43 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:28.103 21:17:43 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:12:28.103 21:17:43 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:28.103 21:17:43 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:28.103 21:17:43 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:28.103 21:17:43 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:28.103 21:17:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:28.103 21:17:43 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:28.103 21:17:43 -- paths/export.sh@5 -- # export PATH 00:12:28.103 21:17:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:28.103 21:17:43 -- nvmf/common.sh@47 -- # : 0 00:12:28.103 21:17:43 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:28.103 21:17:43 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:28.103 21:17:43 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:28.103 21:17:43 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:28.103 21:17:43 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:28.103 21:17:43 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:28.103 21:17:43 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:28.103 21:17:43 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:28.103 21:17:43 -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:12:28.103 21:17:43 -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:12:28.103 21:17:43 -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:12:28.103 21:17:43 -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:12:28.103 21:17:43 -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:12:28.103 21:17:43 -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:12:28.103 21:17:43 -- target/referrals.sh@37 -- # nvmftestinit 00:12:28.103 21:17:43 -- nvmf/common.sh@430 -- # '[' 
-z tcp ']' 00:12:28.103 21:17:43 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:28.103 21:17:43 -- nvmf/common.sh@437 -- # prepare_net_devs 00:12:28.103 21:17:43 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:12:28.103 21:17:43 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:12:28.103 21:17:43 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:28.103 21:17:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:28.103 21:17:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:28.103 21:17:43 -- nvmf/common.sh@403 -- # [[ phy-fallback != virt ]] 00:12:28.103 21:17:43 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:12:28.103 21:17:43 -- nvmf/common.sh@285 -- # xtrace_disable 00:12:28.103 21:17:43 -- common/autotest_common.sh@10 -- # set +x 00:12:33.377 21:17:48 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:12:33.377 21:17:48 -- nvmf/common.sh@291 -- # pci_devs=() 00:12:33.377 21:17:48 -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:33.377 21:17:48 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:33.377 21:17:48 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:33.377 21:17:48 -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:33.377 21:17:48 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:33.377 21:17:48 -- nvmf/common.sh@295 -- # net_devs=() 00:12:33.377 21:17:48 -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:33.377 21:17:48 -- nvmf/common.sh@296 -- # e810=() 00:12:33.377 21:17:48 -- nvmf/common.sh@296 -- # local -ga e810 00:12:33.377 21:17:48 -- nvmf/common.sh@297 -- # x722=() 00:12:33.377 21:17:48 -- nvmf/common.sh@297 -- # local -ga x722 00:12:33.377 21:17:48 -- nvmf/common.sh@298 -- # mlx=() 00:12:33.377 21:17:48 -- nvmf/common.sh@298 -- # local -ga mlx 00:12:33.377 21:17:48 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:33.377 21:17:48 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:33.377 21:17:48 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:33.377 21:17:48 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:33.377 21:17:48 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:33.377 21:17:48 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:33.377 21:17:48 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:33.377 21:17:48 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:33.377 21:17:48 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:33.377 21:17:48 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:33.377 21:17:48 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:33.377 21:17:48 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:33.377 21:17:48 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:33.377 21:17:48 -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:12:33.377 21:17:48 -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:12:33.377 21:17:48 -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:12:33.377 21:17:48 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:33.377 21:17:48 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:33.377 21:17:48 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:12:33.378 Found 0000:27:00.0 (0x8086 - 0x159b) 00:12:33.378 21:17:48 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:33.378 21:17:48 -- 
nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:33.378 21:17:48 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:33.378 21:17:48 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:33.378 21:17:48 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:33.378 21:17:48 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:33.378 21:17:48 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:12:33.378 Found 0000:27:00.1 (0x8086 - 0x159b) 00:12:33.378 21:17:48 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:33.378 21:17:48 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:33.378 21:17:48 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:33.378 21:17:48 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:33.378 21:17:48 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:33.378 21:17:48 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:33.378 21:17:48 -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:12:33.378 21:17:48 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:33.378 21:17:48 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:33.378 21:17:48 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:33.378 21:17:48 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:33.378 21:17:48 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:12:33.378 Found net devices under 0000:27:00.0: cvl_0_0 00:12:33.378 21:17:48 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:33.378 21:17:48 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:33.378 21:17:48 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:33.378 21:17:48 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:33.378 21:17:48 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:33.378 21:17:48 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:12:33.378 Found net devices under 0000:27:00.1: cvl_0_1 00:12:33.378 21:17:48 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:33.378 21:17:48 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:12:33.378 21:17:48 -- nvmf/common.sh@403 -- # is_hw=yes 00:12:33.378 21:17:48 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:12:33.378 21:17:48 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:12:33.378 21:17:48 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:12:33.378 21:17:48 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:33.378 21:17:48 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:33.378 21:17:48 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:33.378 21:17:48 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:33.378 21:17:48 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:33.378 21:17:48 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:33.378 21:17:48 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:33.378 21:17:48 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:33.378 21:17:48 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:33.378 21:17:48 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:33.378 21:17:48 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:33.378 21:17:48 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:33.378 21:17:48 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:33.639 21:17:48 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 
dev cvl_0_1 00:12:33.640 21:17:48 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:33.640 21:17:48 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:33.640 21:17:48 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:33.640 21:17:48 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:33.640 21:17:48 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:33.640 21:17:48 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:33.640 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:33.640 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.445 ms 00:12:33.640 00:12:33.640 --- 10.0.0.2 ping statistics --- 00:12:33.640 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:33.640 rtt min/avg/max/mdev = 0.445/0.445/0.445/0.000 ms 00:12:33.640 21:17:48 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:33.640 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:33.640 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.295 ms 00:12:33.640 00:12:33.640 --- 10.0.0.1 ping statistics --- 00:12:33.640 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:33.640 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:12:33.640 21:17:48 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:33.640 21:17:48 -- nvmf/common.sh@411 -- # return 0 00:12:33.640 21:17:48 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:12:33.640 21:17:48 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:33.640 21:17:48 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:12:33.640 21:17:48 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:12:33.640 21:17:48 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:33.640 21:17:48 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:12:33.640 21:17:48 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:12:33.640 21:17:48 -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:12:33.640 21:17:48 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:12:33.640 21:17:48 -- common/autotest_common.sh@710 -- # xtrace_disable 00:12:33.640 21:17:48 -- common/autotest_common.sh@10 -- # set +x 00:12:33.640 21:17:48 -- nvmf/common.sh@470 -- # nvmfpid=1107287 00:12:33.640 21:17:48 -- nvmf/common.sh@471 -- # waitforlisten 1107287 00:12:33.640 21:17:48 -- common/autotest_common.sh@817 -- # '[' -z 1107287 ']' 00:12:33.640 21:17:48 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:33.640 21:17:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:33.640 21:17:48 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:33.640 21:17:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:33.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:33.640 21:17:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:33.640 21:17:48 -- common/autotest_common.sh@10 -- # set +x 00:12:33.901 [2024-04-24 21:17:48.610491] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization... 
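The ping exchange above gates the rest of the run: no NVMe/TCP traffic is attempted until both directions answer. Condensed, with addresses, interface names, and the port rule copied from this trace, the check is:

# Probe both directions between the initiator-side interface (cvl_0_1) and
# the target interface (cvl_0_0) that was moved into the test namespace.
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
# The rule inserted at nvmf/common.sh@264 above admits NVMe/TCP traffic on
# the default test port:
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT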
00:12:33.901 [2024-04-24 21:17:48.610603] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:33.901 EAL: No free 2048 kB hugepages reported on node 1 00:12:33.901 [2024-04-24 21:17:48.735670] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:33.901 [2024-04-24 21:17:48.830916] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:33.901 [2024-04-24 21:17:48.830952] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:33.901 [2024-04-24 21:17:48.830965] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:33.901 [2024-04-24 21:17:48.830975] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:33.901 [2024-04-24 21:17:48.830983] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:33.901 [2024-04-24 21:17:48.831051] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:33.901 [2024-04-24 21:17:48.831081] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:33.901 [2024-04-24 21:17:48.831117] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:33.901 [2024-04-24 21:17:48.831106] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:34.473 21:17:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:34.473 21:17:49 -- common/autotest_common.sh@850 -- # return 0 00:12:34.473 21:17:49 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:12:34.473 21:17:49 -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:34.473 21:17:49 -- common/autotest_common.sh@10 -- # set +x 00:12:34.473 21:17:49 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:34.473 21:17:49 -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:34.473 21:17:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:34.473 21:17:49 -- common/autotest_common.sh@10 -- # set +x 00:12:34.473 [2024-04-24 21:17:49.381154] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:34.473 21:17:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:34.473 21:17:49 -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:12:34.473 21:17:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:34.473 21:17:49 -- common/autotest_common.sh@10 -- # set +x 00:12:34.473 [2024-04-24 21:17:49.397376] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:12:34.473 21:17:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:34.473 21:17:49 -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:12:34.473 21:17:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:34.473 21:17:49 -- common/autotest_common.sh@10 -- # set +x 00:12:34.473 21:17:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:34.473 21:17:49 -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:12:34.473 21:17:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:34.473 21:17:49 -- common/autotest_common.sh@10 -- # set +x 00:12:34.473 21:17:49 -- common/autotest_common.sh@577 -- # 
[[ 0 == 0 ]] 00:12:34.473 21:17:49 -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:12:34.473 21:17:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:34.473 21:17:49 -- common/autotest_common.sh@10 -- # set +x 00:12:34.473 21:17:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:34.473 21:17:49 -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:34.473 21:17:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:34.473 21:17:49 -- common/autotest_common.sh@10 -- # set +x 00:12:34.473 21:17:49 -- target/referrals.sh@48 -- # jq length 00:12:34.734 21:17:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:34.734 21:17:49 -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:12:34.734 21:17:49 -- target/referrals.sh@49 -- # get_referral_ips rpc 00:12:34.734 21:17:49 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:34.734 21:17:49 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:34.734 21:17:49 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:34.734 21:17:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:34.734 21:17:49 -- target/referrals.sh@21 -- # sort 00:12:34.734 21:17:49 -- common/autotest_common.sh@10 -- # set +x 00:12:34.734 21:17:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:34.734 21:17:49 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:34.734 21:17:49 -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:34.734 21:17:49 -- target/referrals.sh@50 -- # get_referral_ips nvme 00:12:34.734 21:17:49 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:34.734 21:17:49 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:34.734 21:17:49 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b7babf-2e5c-ee11-906e-a4bf01970bf2 --hostid=80b7babf-2e5c-ee11-906e-a4bf01970bf2 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:34.734 21:17:49 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:34.734 21:17:49 -- target/referrals.sh@26 -- # sort 00:12:34.734 21:17:49 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:34.734 21:17:49 -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:34.734 21:17:49 -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:12:34.734 21:17:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:34.734 21:17:49 -- common/autotest_common.sh@10 -- # set +x 00:12:34.734 21:17:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:34.734 21:17:49 -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:12:34.734 21:17:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:34.734 21:17:49 -- common/autotest_common.sh@10 -- # set +x 00:12:34.734 21:17:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:34.734 21:17:49 -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:12:34.734 21:17:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:34.734 21:17:49 -- common/autotest_common.sh@10 -- # set +x 00:12:34.994 21:17:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:34.994 21:17:49 -- target/referrals.sh@56 -- # rpc_cmd 
nvmf_discovery_get_referrals 00:12:34.994 21:17:49 -- target/referrals.sh@56 -- # jq length 00:12:34.994 21:17:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:34.994 21:17:49 -- common/autotest_common.sh@10 -- # set +x 00:12:34.994 21:17:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:34.994 21:17:49 -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:12:34.994 21:17:49 -- target/referrals.sh@57 -- # get_referral_ips nvme 00:12:34.994 21:17:49 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:34.994 21:17:49 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:34.994 21:17:49 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b7babf-2e5c-ee11-906e-a4bf01970bf2 --hostid=80b7babf-2e5c-ee11-906e-a4bf01970bf2 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:34.994 21:17:49 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:34.994 21:17:49 -- target/referrals.sh@26 -- # sort 00:12:34.994 21:17:49 -- target/referrals.sh@26 -- # echo 00:12:34.994 21:17:49 -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:12:34.994 21:17:49 -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:12:34.995 21:17:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:34.995 21:17:49 -- common/autotest_common.sh@10 -- # set +x 00:12:34.995 21:17:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:34.995 21:17:49 -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:34.995 21:17:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:34.995 21:17:49 -- common/autotest_common.sh@10 -- # set +x 00:12:34.995 21:17:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:34.995 21:17:49 -- target/referrals.sh@65 -- # get_referral_ips rpc 00:12:34.995 21:17:49 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:34.995 21:17:49 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:34.995 21:17:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:34.995 21:17:49 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:34.995 21:17:49 -- common/autotest_common.sh@10 -- # set +x 00:12:34.995 21:17:49 -- target/referrals.sh@21 -- # sort 00:12:34.995 21:17:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:34.995 21:17:49 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:12:34.995 21:17:49 -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:34.995 21:17:49 -- target/referrals.sh@66 -- # get_referral_ips nvme 00:12:34.995 21:17:49 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:34.995 21:17:49 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:34.995 21:17:49 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b7babf-2e5c-ee11-906e-a4bf01970bf2 --hostid=80b7babf-2e5c-ee11-906e-a4bf01970bf2 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:34.995 21:17:49 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:34.995 21:17:49 -- target/referrals.sh@26 -- # sort 00:12:35.255 21:17:50 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:12:35.255 21:17:50 -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:35.255 21:17:50 -- target/referrals.sh@67 -- # get_discovery_entries 'nvme 
subsystem' 00:12:35.255 21:17:50 -- target/referrals.sh@67 -- # jq -r .subnqn 00:12:35.256 21:17:50 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:35.256 21:17:50 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b7babf-2e5c-ee11-906e-a4bf01970bf2 --hostid=80b7babf-2e5c-ee11-906e-a4bf01970bf2 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:35.256 21:17:50 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:35.514 21:17:50 -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:12:35.514 21:17:50 -- target/referrals.sh@68 -- # jq -r .subnqn 00:12:35.515 21:17:50 -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:12:35.515 21:17:50 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:35.515 21:17:50 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b7babf-2e5c-ee11-906e-a4bf01970bf2 --hostid=80b7babf-2e5c-ee11-906e-a4bf01970bf2 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:35.515 21:17:50 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:35.515 21:17:50 -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:35.515 21:17:50 -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:35.515 21:17:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:35.515 21:17:50 -- common/autotest_common.sh@10 -- # set +x 00:12:35.515 21:17:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:35.515 21:17:50 -- target/referrals.sh@73 -- # get_referral_ips rpc 00:12:35.515 21:17:50 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:35.515 21:17:50 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:35.515 21:17:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:35.515 21:17:50 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:35.515 21:17:50 -- common/autotest_common.sh@10 -- # set +x 00:12:35.515 21:17:50 -- target/referrals.sh@21 -- # sort 00:12:35.774 21:17:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:35.774 21:17:50 -- target/referrals.sh@21 -- # echo 127.0.0.2 00:12:35.774 21:17:50 -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:35.774 21:17:50 -- target/referrals.sh@74 -- # get_referral_ips nvme 00:12:35.774 21:17:50 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:35.774 21:17:50 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:35.774 21:17:50 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b7babf-2e5c-ee11-906e-a4bf01970bf2 --hostid=80b7babf-2e5c-ee11-906e-a4bf01970bf2 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:35.774 21:17:50 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:35.774 21:17:50 -- target/referrals.sh@26 -- # sort 00:12:35.774 21:17:50 -- target/referrals.sh@26 -- # echo 127.0.0.2 00:12:35.774 21:17:50 -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:35.774 21:17:50 -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:12:35.774 21:17:50 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:35.774 21:17:50 -- 
target/referrals.sh@75 -- # jq -r .subnqn 00:12:35.774 21:17:50 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b7babf-2e5c-ee11-906e-a4bf01970bf2 --hostid=80b7babf-2e5c-ee11-906e-a4bf01970bf2 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:35.774 21:17:50 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:36.034 21:17:50 -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:12:36.034 21:17:50 -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:12:36.034 21:17:50 -- target/referrals.sh@76 -- # jq -r .subnqn 00:12:36.034 21:17:50 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:36.034 21:17:50 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b7babf-2e5c-ee11-906e-a4bf01970bf2 --hostid=80b7babf-2e5c-ee11-906e-a4bf01970bf2 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:36.034 21:17:50 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:36.034 21:17:50 -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:36.034 21:17:50 -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:12:36.034 21:17:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:36.034 21:17:50 -- common/autotest_common.sh@10 -- # set +x 00:12:36.034 21:17:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:36.034 21:17:50 -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:36.294 21:17:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:36.294 21:17:50 -- common/autotest_common.sh@10 -- # set +x 00:12:36.294 21:17:50 -- target/referrals.sh@82 -- # jq length 00:12:36.294 21:17:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:36.294 21:17:51 -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:12:36.294 21:17:51 -- target/referrals.sh@83 -- # get_referral_ips nvme 00:12:36.294 21:17:51 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:36.294 21:17:51 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:36.294 21:17:51 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b7babf-2e5c-ee11-906e-a4bf01970bf2 --hostid=80b7babf-2e5c-ee11-906e-a4bf01970bf2 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:36.294 21:17:51 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:36.294 21:17:51 -- target/referrals.sh@26 -- # sort 00:12:36.294 21:17:51 -- target/referrals.sh@26 -- # echo 00:12:36.294 21:17:51 -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:12:36.294 21:17:51 -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:12:36.294 21:17:51 -- target/referrals.sh@86 -- # nvmftestfini 00:12:36.294 21:17:51 -- nvmf/common.sh@477 -- # nvmfcleanup 00:12:36.294 21:17:51 -- nvmf/common.sh@117 -- # sync 00:12:36.294 21:17:51 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:36.294 21:17:51 -- nvmf/common.sh@120 -- # set +e 00:12:36.294 21:17:51 -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:36.294 21:17:51 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:36.294 rmmod nvme_tcp 00:12:36.294 rmmod nvme_fabrics 00:12:36.294 rmmod nvme_keyring 00:12:36.614 21:17:51 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:36.614 21:17:51 -- nvmf/common.sh@124 -- # 
set -e 00:12:36.614 21:17:51 -- nvmf/common.sh@125 -- # return 0 00:12:36.614 21:17:51 -- nvmf/common.sh@478 -- # '[' -n 1107287 ']' 00:12:36.614 21:17:51 -- nvmf/common.sh@479 -- # killprocess 1107287 00:12:36.614 21:17:51 -- common/autotest_common.sh@936 -- # '[' -z 1107287 ']' 00:12:36.614 21:17:51 -- common/autotest_common.sh@940 -- # kill -0 1107287 00:12:36.614 21:17:51 -- common/autotest_common.sh@941 -- # uname 00:12:36.614 21:17:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:36.614 21:17:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1107287 00:12:36.614 21:17:51 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:36.614 21:17:51 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:36.614 21:17:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1107287' 00:12:36.614 killing process with pid 1107287 00:12:36.614 21:17:51 -- common/autotest_common.sh@955 -- # kill 1107287 00:12:36.614 21:17:51 -- common/autotest_common.sh@960 -- # wait 1107287 00:12:36.946 21:17:51 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:12:36.946 21:17:51 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:12:36.946 21:17:51 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:12:36.946 21:17:51 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:36.946 21:17:51 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:36.946 21:17:51 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:36.946 21:17:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:36.946 21:17:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:39.500 21:17:53 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:39.500 00:12:39.500 real 0m10.906s 00:12:39.500 user 0m13.502s 00:12:39.500 sys 0m4.883s 00:12:39.500 21:17:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:39.500 21:17:53 -- common/autotest_common.sh@10 -- # set +x 00:12:39.500 ************************************ 00:12:39.500 END TEST nvmf_referrals 00:12:39.500 ************************************ 00:12:39.500 21:17:53 -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:39.500 21:17:53 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:39.500 21:17:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:39.500 21:17:53 -- common/autotest_common.sh@10 -- # set +x 00:12:39.500 ************************************ 00:12:39.500 START TEST nvmf_connect_disconnect 00:12:39.500 ************************************ 00:12:39.500 21:17:53 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:39.500 * Looking for test storage... 
00:12:39.500 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:12:39.500 21:17:54 -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:12:39.500 21:17:54 -- nvmf/common.sh@7 -- # uname -s 00:12:39.500 21:17:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:39.500 21:17:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:39.500 21:17:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:39.500 21:17:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:39.500 21:17:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:39.500 21:17:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:39.500 21:17:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:39.500 21:17:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:39.500 21:17:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:39.500 21:17:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:39.500 21:17:54 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b7babf-2e5c-ee11-906e-a4bf01970bf2 00:12:39.500 21:17:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=80b7babf-2e5c-ee11-906e-a4bf01970bf2 00:12:39.500 21:17:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:39.500 21:17:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:39.500 21:17:54 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:12:39.500 21:17:54 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:39.500 21:17:54 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:12:39.500 21:17:54 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:39.500 21:17:54 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:39.500 21:17:54 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:39.500 21:17:54 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.501 21:17:54 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.501 21:17:54 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.501 21:17:54 -- paths/export.sh@5 -- # export PATH 00:12:39.501 21:17:54 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.501 21:17:54 -- nvmf/common.sh@47 -- # : 0 00:12:39.501 21:17:54 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:39.501 21:17:54 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:39.501 21:17:54 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:39.501 21:17:54 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:39.501 21:17:54 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:39.501 21:17:54 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:39.501 21:17:54 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:39.501 21:17:54 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:39.501 21:17:54 -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:39.501 21:17:54 -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:39.501 21:17:54 -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:12:39.501 21:17:54 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:12:39.501 21:17:54 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:39.501 21:17:54 -- nvmf/common.sh@437 -- # prepare_net_devs 00:12:39.501 21:17:54 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:12:39.501 21:17:54 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:12:39.501 21:17:54 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:39.501 21:17:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:39.501 21:17:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:39.501 21:17:54 -- nvmf/common.sh@403 -- # [[ phy-fallback != virt ]] 00:12:39.501 21:17:54 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:12:39.501 21:17:54 -- nvmf/common.sh@285 -- # xtrace_disable 00:12:39.501 21:17:54 -- common/autotest_common.sh@10 -- # set +x 00:12:46.084 21:18:00 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:12:46.084 21:18:00 -- nvmf/common.sh@291 -- # pci_devs=() 00:12:46.084 21:18:00 -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:46.084 21:18:00 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:46.084 21:18:00 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:46.084 21:18:00 -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:46.084 21:18:00 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:46.084 21:18:00 -- nvmf/common.sh@295 -- # net_devs=() 00:12:46.084 21:18:00 -- nvmf/common.sh@295 -- # local -ga net_devs 
00:12:46.084 21:18:00 -- nvmf/common.sh@296 -- # e810=() 00:12:46.084 21:18:00 -- nvmf/common.sh@296 -- # local -ga e810 00:12:46.084 21:18:00 -- nvmf/common.sh@297 -- # x722=() 00:12:46.084 21:18:00 -- nvmf/common.sh@297 -- # local -ga x722 00:12:46.084 21:18:00 -- nvmf/common.sh@298 -- # mlx=() 00:12:46.084 21:18:00 -- nvmf/common.sh@298 -- # local -ga mlx 00:12:46.084 21:18:00 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:46.084 21:18:00 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:46.084 21:18:00 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:46.084 21:18:00 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:46.084 21:18:00 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:46.084 21:18:00 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:46.084 21:18:00 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:46.084 21:18:00 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:46.085 21:18:00 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:46.085 21:18:00 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:46.085 21:18:00 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:46.085 21:18:00 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:46.085 21:18:00 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:46.085 21:18:00 -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:12:46.085 21:18:00 -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:12:46.085 21:18:00 -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:12:46.085 21:18:00 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:46.085 21:18:00 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:46.085 21:18:00 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:12:46.085 Found 0000:27:00.0 (0x8086 - 0x159b) 00:12:46.085 21:18:00 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:46.085 21:18:00 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:46.085 21:18:00 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:46.085 21:18:00 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:46.085 21:18:00 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:46.085 21:18:00 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:46.085 21:18:00 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:12:46.085 Found 0000:27:00.1 (0x8086 - 0x159b) 00:12:46.085 21:18:00 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:46.085 21:18:00 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:46.085 21:18:00 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:46.085 21:18:00 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:46.085 21:18:00 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:46.085 21:18:00 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:46.085 21:18:00 -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:12:46.085 21:18:00 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:46.085 21:18:00 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:46.085 21:18:00 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:46.085 21:18:00 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:46.085 21:18:00 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:12:46.085 Found net devices under 0000:27:00.0: 
cvl_0_0 00:12:46.085 21:18:00 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:46.085 21:18:00 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:46.085 21:18:00 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:46.085 21:18:00 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:46.085 21:18:00 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:46.085 21:18:00 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:12:46.085 Found net devices under 0000:27:00.1: cvl_0_1 00:12:46.085 21:18:00 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:46.085 21:18:00 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:12:46.085 21:18:00 -- nvmf/common.sh@403 -- # is_hw=yes 00:12:46.085 21:18:00 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:12:46.085 21:18:00 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:12:46.085 21:18:00 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:12:46.085 21:18:00 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:46.085 21:18:00 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:46.085 21:18:00 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:46.085 21:18:00 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:46.085 21:18:00 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:46.085 21:18:00 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:46.085 21:18:00 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:46.085 21:18:00 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:46.085 21:18:00 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:46.085 21:18:00 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:46.085 21:18:00 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:46.085 21:18:00 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:46.085 21:18:00 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:46.085 21:18:00 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:46.085 21:18:00 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:46.085 21:18:00 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:46.085 21:18:00 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:46.085 21:18:00 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:46.085 21:18:00 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:46.085 21:18:00 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:46.085 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:46.085 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.236 ms 00:12:46.085 00:12:46.085 --- 10.0.0.2 ping statistics --- 00:12:46.085 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:46.085 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:12:46.085 21:18:00 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:46.085 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:46.085 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.434 ms 00:12:46.085 00:12:46.085 --- 10.0.0.1 ping statistics --- 00:12:46.085 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:46.085 rtt min/avg/max/mdev = 0.434/0.434/0.434/0.000 ms 00:12:46.085 21:18:00 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:46.085 21:18:00 -- nvmf/common.sh@411 -- # return 0 00:12:46.085 21:18:00 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:12:46.085 21:18:00 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:46.085 21:18:00 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:12:46.085 21:18:00 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:12:46.085 21:18:00 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:46.085 21:18:00 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:12:46.085 21:18:00 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:12:46.085 21:18:00 -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:12:46.085 21:18:00 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:12:46.085 21:18:00 -- common/autotest_common.sh@710 -- # xtrace_disable 00:12:46.085 21:18:00 -- common/autotest_common.sh@10 -- # set +x 00:12:46.085 21:18:00 -- nvmf/common.sh@470 -- # nvmfpid=1112118 00:12:46.085 21:18:00 -- nvmf/common.sh@471 -- # waitforlisten 1112118 00:12:46.085 21:18:00 -- common/autotest_common.sh@817 -- # '[' -z 1112118 ']' 00:12:46.085 21:18:00 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:46.085 21:18:00 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:46.085 21:18:00 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:46.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:46.085 21:18:00 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:46.085 21:18:00 -- common/autotest_common.sh@10 -- # set +x 00:12:46.085 21:18:00 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:46.085 [2024-04-24 21:18:00.427248] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization... 00:12:46.085 [2024-04-24 21:18:00.427389] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:46.085 EAL: No free 2048 kB hugepages reported on node 1 00:12:46.085 [2024-04-24 21:18:00.566933] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:46.085 [2024-04-24 21:18:00.661986] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:46.085 [2024-04-24 21:18:00.662033] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:46.085 [2024-04-24 21:18:00.662046] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:46.085 [2024-04-24 21:18:00.662054] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:46.085 [2024-04-24 21:18:00.662062] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
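For orientation, the nvmftestinit trace above builds the usual two-port loopback topology for NVMe/TCP: one port (cvl_0_0) is moved into a private network namespace and becomes the target side at 10.0.0.2, while its peer (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, and reachability is proven with one ping in each direction before the target starts. A condensed sketch of the same sequence, with every command and name taken from the trace itself:

    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator side (root ns)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # NVMe/TCP port
    ping -c 1 10.0.0.2                                        # root ns -> namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1          # namespace -> root ns
    ip netns exec cvl_0_0_ns_spdk $SPDK_BIN/nvmf_tgt -i 0 -e 0xFFFF -m 0xF   # target inside the ns

($SPDK_BIN stands in for the build/bin path traced above; the -m 0xF core mask is why four reactors come up below.)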
00:12:46.085 [2024-04-24 21:18:00.662237] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:46.085 [2024-04-24 21:18:00.662337] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:46.085 [2024-04-24 21:18:00.662385] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:46.085 [2024-04-24 21:18:00.662395] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:46.346 21:18:01 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:46.346 21:18:01 -- common/autotest_common.sh@850 -- # return 0 00:12:46.346 21:18:01 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:12:46.346 21:18:01 -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:46.346 21:18:01 -- common/autotest_common.sh@10 -- # set +x 00:12:46.346 21:18:01 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:46.346 21:18:01 -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:46.346 21:18:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:46.346 21:18:01 -- common/autotest_common.sh@10 -- # set +x 00:12:46.346 [2024-04-24 21:18:01.172752] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:46.346 21:18:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:46.346 21:18:01 -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:12:46.346 21:18:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:46.346 21:18:01 -- common/autotest_common.sh@10 -- # set +x 00:12:46.346 21:18:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:46.346 21:18:01 -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:12:46.346 21:18:01 -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:46.346 21:18:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:46.346 21:18:01 -- common/autotest_common.sh@10 -- # set +x 00:12:46.346 21:18:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:46.346 21:18:01 -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:46.346 21:18:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:46.346 21:18:01 -- common/autotest_common.sh@10 -- # set +x 00:12:46.346 21:18:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:46.346 21:18:01 -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:46.346 21:18:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:46.346 21:18:01 -- common/autotest_common.sh@10 -- # set +x 00:12:46.346 [2024-04-24 21:18:01.241534] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:46.346 21:18:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:46.346 21:18:01 -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:12:46.346 21:18:01 -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:12:46.346 21:18:01 -- target/connect_disconnect.sh@34 -- # set +x 00:12:50.553 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:53.849 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:57.145 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:01.345 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:04.641 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:04.641 21:18:19 -- 
target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:13:04.641 21:18:19 -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:13:04.641 21:18:19 -- nvmf/common.sh@477 -- # nvmfcleanup 00:13:04.641 21:18:19 -- nvmf/common.sh@117 -- # sync 00:13:04.641 21:18:19 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:04.641 21:18:19 -- nvmf/common.sh@120 -- # set +e 00:13:04.641 21:18:19 -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:04.641 21:18:19 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:04.641 rmmod nvme_tcp 00:13:04.641 rmmod nvme_fabrics 00:13:04.641 rmmod nvme_keyring 00:13:04.641 21:18:19 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:04.641 21:18:19 -- nvmf/common.sh@124 -- # set -e 00:13:04.641 21:18:19 -- nvmf/common.sh@125 -- # return 0 00:13:04.641 21:18:19 -- nvmf/common.sh@478 -- # '[' -n 1112118 ']' 00:13:04.641 21:18:19 -- nvmf/common.sh@479 -- # killprocess 1112118 00:13:04.641 21:18:19 -- common/autotest_common.sh@936 -- # '[' -z 1112118 ']' 00:13:04.641 21:18:19 -- common/autotest_common.sh@940 -- # kill -0 1112118 00:13:04.641 21:18:19 -- common/autotest_common.sh@941 -- # uname 00:13:04.641 21:18:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:04.641 21:18:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1112118 00:13:04.641 21:18:19 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:04.641 21:18:19 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:04.641 21:18:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1112118' 00:13:04.641 killing process with pid 1112118 00:13:04.641 21:18:19 -- common/autotest_common.sh@955 -- # kill 1112118 00:13:04.641 21:18:19 -- common/autotest_common.sh@960 -- # wait 1112118 00:13:05.210 21:18:19 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:13:05.210 21:18:19 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:13:05.210 21:18:19 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:13:05.210 21:18:19 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:05.210 21:18:19 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:05.210 21:18:19 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:05.210 21:18:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:05.210 21:18:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:07.120 21:18:21 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:07.120 00:13:07.120 real 0m27.969s 00:13:07.120 user 1m17.629s 00:13:07.120 sys 0m5.972s 00:13:07.120 21:18:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:07.120 21:18:21 -- common/autotest_common.sh@10 -- # set +x 00:13:07.120 ************************************ 00:13:07.120 END TEST nvmf_connect_disconnect 00:13:07.120 ************************************ 00:13:07.120 21:18:21 -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:13:07.120 21:18:21 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:07.120 21:18:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:07.120 21:18:21 -- common/autotest_common.sh@10 -- # set +x 00:13:07.380 ************************************ 00:13:07.380 START TEST nvmf_multitarget 00:13:07.380 ************************************ 00:13:07.380 21:18:22 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 
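The connect_disconnect run that just ended reduces to a small provisioning-plus-loop script: the traced RPCs create the TCP transport, a 64 MiB / 512 B malloc bdev, and subsystem nqn.2016-06.io.spdk:cnode1 with that namespace and a 10.0.0.2:4420 listener, after which each of the five iterations attaches and detaches a host controller ("disconnected 1 controller(s)"). A sketch under one assumption: the per-iteration nvme-cli calls are not traced in this excerpt, so the connect/disconnect pair below is the presumed loop body (host NQN/ID flags omitted; rpc_cmd is the harness wrapper around scripts/rpc.py):

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0
    rpc_cmd bdev_malloc_create 64 512                          # -> Malloc0
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    for i in $(seq 1 5); do                                    # num_iterations=5 in the trace
        nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1          # "... disconnected 1 controller(s)"
    done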
00:13:07.380 * Looking for test storage... 00:13:07.380 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:13:07.380 21:18:22 -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:13:07.380 21:18:22 -- nvmf/common.sh@7 -- # uname -s 00:13:07.380 21:18:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:07.380 21:18:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:07.380 21:18:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:07.380 21:18:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:07.380 21:18:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:07.380 21:18:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:07.380 21:18:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:07.380 21:18:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:07.380 21:18:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:07.380 21:18:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:07.380 21:18:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b7babf-2e5c-ee11-906e-a4bf01970bf2 00:13:07.380 21:18:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=80b7babf-2e5c-ee11-906e-a4bf01970bf2 00:13:07.380 21:18:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:07.380 21:18:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:07.380 21:18:22 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:13:07.380 21:18:22 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:07.380 21:18:22 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:13:07.380 21:18:22 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:07.380 21:18:22 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:07.380 21:18:22 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:07.380 21:18:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.380 21:18:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.380 21:18:22 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.380 21:18:22 -- paths/export.sh@5 -- # export PATH 00:13:07.380 21:18:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.380 21:18:22 -- nvmf/common.sh@47 -- # : 0 00:13:07.380 21:18:22 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:07.380 21:18:22 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:07.380 21:18:22 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:07.380 21:18:22 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:07.380 21:18:22 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:07.380 21:18:22 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:07.380 21:18:22 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:07.380 21:18:22 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:07.380 21:18:22 -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:07.380 21:18:22 -- target/multitarget.sh@15 -- # nvmftestinit 00:13:07.380 21:18:22 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:13:07.380 21:18:22 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:07.380 21:18:22 -- nvmf/common.sh@437 -- # prepare_net_devs 00:13:07.380 21:18:22 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:13:07.380 21:18:22 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:13:07.380 21:18:22 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:07.380 21:18:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:07.380 21:18:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:07.380 21:18:22 -- nvmf/common.sh@403 -- # [[ phy-fallback != virt ]] 00:13:07.380 21:18:22 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:13:07.380 21:18:22 -- nvmf/common.sh@285 -- # xtrace_disable 00:13:07.380 21:18:22 -- common/autotest_common.sh@10 -- # set +x 00:13:13.960 21:18:27 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:13.960 21:18:27 -- nvmf/common.sh@291 -- # pci_devs=() 00:13:13.960 21:18:27 -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:13.960 21:18:27 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:13.960 21:18:27 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:13.960 21:18:27 -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:13.960 21:18:27 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:13.960 21:18:27 -- nvmf/common.sh@295 -- # net_devs=() 00:13:13.960 21:18:27 -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:13.960 21:18:27 -- 
nvmf/common.sh@296 -- # e810=() 00:13:13.960 21:18:27 -- nvmf/common.sh@296 -- # local -ga e810 00:13:13.960 21:18:27 -- nvmf/common.sh@297 -- # x722=() 00:13:13.960 21:18:27 -- nvmf/common.sh@297 -- # local -ga x722 00:13:13.960 21:18:27 -- nvmf/common.sh@298 -- # mlx=() 00:13:13.960 21:18:27 -- nvmf/common.sh@298 -- # local -ga mlx 00:13:13.960 21:18:27 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:13.960 21:18:27 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:13.960 21:18:27 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:13.960 21:18:27 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:13.960 21:18:27 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:13.960 21:18:27 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:13.960 21:18:27 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:13.960 21:18:27 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:13.961 21:18:27 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:13.961 21:18:27 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:13.961 21:18:27 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:13.961 21:18:27 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:13.961 21:18:27 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:13.961 21:18:27 -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:13:13.961 21:18:27 -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:13:13.961 21:18:27 -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:13:13.961 21:18:27 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:13.961 21:18:27 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:13.961 21:18:27 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:13:13.961 Found 0000:27:00.0 (0x8086 - 0x159b) 00:13:13.961 21:18:27 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:13.961 21:18:27 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:13.961 21:18:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:13.961 21:18:27 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:13.961 21:18:27 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:13.961 21:18:27 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:13.961 21:18:27 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:13:13.961 Found 0000:27:00.1 (0x8086 - 0x159b) 00:13:13.961 21:18:27 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:13.961 21:18:27 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:13.961 21:18:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:13.961 21:18:27 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:13.961 21:18:27 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:13.961 21:18:27 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:13.961 21:18:27 -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:13:13.961 21:18:27 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:13.961 21:18:27 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:13.961 21:18:27 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:13.961 21:18:27 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:13.961 21:18:27 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:13:13.961 Found net devices under 0000:27:00.0: cvl_0_0 00:13:13.961 
21:18:27 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:13.961 21:18:27 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:13.961 21:18:27 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:13.961 21:18:27 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:13.961 21:18:27 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:13.961 21:18:27 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:13:13.961 Found net devices under 0000:27:00.1: cvl_0_1 00:13:13.961 21:18:27 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:13.961 21:18:27 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:13:13.961 21:18:27 -- nvmf/common.sh@403 -- # is_hw=yes 00:13:13.961 21:18:27 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:13:13.961 21:18:27 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:13:13.961 21:18:27 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:13:13.961 21:18:27 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:13.961 21:18:27 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:13.961 21:18:27 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:13.961 21:18:27 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:13.961 21:18:27 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:13.961 21:18:27 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:13.961 21:18:27 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:13.961 21:18:27 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:13.961 21:18:27 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:13.961 21:18:27 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:13.961 21:18:27 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:13.961 21:18:27 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:13.961 21:18:27 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:13.961 21:18:28 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:13.961 21:18:28 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:13.961 21:18:28 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:13.961 21:18:28 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:13.961 21:18:28 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:13.961 21:18:28 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:13.961 21:18:28 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:13.961 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:13.961 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.219 ms 00:13:13.961 00:13:13.961 --- 10.0.0.2 ping statistics --- 00:13:13.961 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:13.961 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:13:13.961 21:18:28 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:13.961 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:13.961 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:13:13.961 00:13:13.961 --- 10.0.0.1 ping statistics --- 00:13:13.961 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:13.961 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:13:13.961 21:18:28 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:13.961 21:18:28 -- nvmf/common.sh@411 -- # return 0 00:13:13.961 21:18:28 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:13:13.961 21:18:28 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:13.961 21:18:28 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:13:13.961 21:18:28 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:13:13.961 21:18:28 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:13.961 21:18:28 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:13:13.961 21:18:28 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:13:13.961 21:18:28 -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:13:13.961 21:18:28 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:13:13.961 21:18:28 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:13.961 21:18:28 -- common/autotest_common.sh@10 -- # set +x 00:13:13.961 21:18:28 -- nvmf/common.sh@470 -- # nvmfpid=1120483 00:13:13.961 21:18:28 -- nvmf/common.sh@471 -- # waitforlisten 1120483 00:13:13.961 21:18:28 -- common/autotest_common.sh@817 -- # '[' -z 1120483 ']' 00:13:13.961 21:18:28 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:13.961 21:18:28 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:13.961 21:18:28 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:13.961 21:18:28 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:13.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:13.961 21:18:28 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:13.961 21:18:28 -- common/autotest_common.sh@10 -- # set +x 00:13:13.961 [2024-04-24 21:18:28.304759] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization... 00:13:13.961 [2024-04-24 21:18:28.304866] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:13.961 EAL: No free 2048 kB hugepages reported on node 1 00:13:13.961 [2024-04-24 21:18:28.427874] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:13.961 [2024-04-24 21:18:28.527233] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:13.961 [2024-04-24 21:18:28.527276] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:13.961 [2024-04-24 21:18:28.527288] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:13.961 [2024-04-24 21:18:28.527297] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:13.961 [2024-04-24 21:18:28.527304] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
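With the target up, the multitarget segment below is a pure RPC exercise against multitarget_rpc.py: assert that exactly one (default) target exists, create nvmf_tgt_1 and nvmf_tgt_2, assert the count is three, delete both, and assert the count is back to one. Condensed, with bracket tests standing in for the traced '[' N '!=' N ']' guards:

    rpc=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
    [ "$($rpc nvmf_get_targets | jq length)" = 1 ]     # only the default target
    $rpc nvmf_create_target -n nvmf_tgt_1 -s 32        # echoes "nvmf_tgt_1"
    $rpc nvmf_create_target -n nvmf_tgt_2 -s 32        # echoes "nvmf_tgt_2"
    [ "$($rpc nvmf_get_targets | jq length)" = 3 ]
    $rpc nvmf_delete_target -n nvmf_tgt_1              # true
    $rpc nvmf_delete_target -n nvmf_tgt_2              # true
    [ "$($rpc nvmf_get_targets | jq length)" = 1 ]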
00:13:13.961 [2024-04-24 21:18:28.527428] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:13.961 [2024-04-24 21:18:28.527528] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:13.961 [2024-04-24 21:18:28.527554] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:13.961 [2024-04-24 21:18:28.527564] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:14.221 21:18:29 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:14.221 21:18:29 -- common/autotest_common.sh@850 -- # return 0 00:13:14.221 21:18:29 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:13:14.221 21:18:29 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:14.221 21:18:29 -- common/autotest_common.sh@10 -- # set +x 00:13:14.221 21:18:29 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:14.221 21:18:29 -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:14.221 21:18:29 -- target/multitarget.sh@21 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:14.221 21:18:29 -- target/multitarget.sh@21 -- # jq length 00:13:14.221 21:18:29 -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:13:14.221 21:18:29 -- target/multitarget.sh@25 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:13:14.481 "nvmf_tgt_1" 00:13:14.481 21:18:29 -- target/multitarget.sh@26 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:13:14.481 "nvmf_tgt_2" 00:13:14.481 21:18:29 -- target/multitarget.sh@28 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:14.481 21:18:29 -- target/multitarget.sh@28 -- # jq length 00:13:14.481 21:18:29 -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:13:14.481 21:18:29 -- target/multitarget.sh@32 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:13:14.741 true 00:13:14.741 21:18:29 -- target/multitarget.sh@33 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:13:14.741 true 00:13:14.741 21:18:29 -- target/multitarget.sh@35 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:14.741 21:18:29 -- target/multitarget.sh@35 -- # jq length 00:13:14.741 21:18:29 -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:13:14.741 21:18:29 -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:13:14.741 21:18:29 -- target/multitarget.sh@41 -- # nvmftestfini 00:13:14.741 21:18:29 -- nvmf/common.sh@477 -- # nvmfcleanup 00:13:14.741 21:18:29 -- nvmf/common.sh@117 -- # sync 00:13:14.741 21:18:29 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:14.741 21:18:29 -- nvmf/common.sh@120 -- # set +e 00:13:14.741 21:18:29 -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:14.741 21:18:29 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:14.741 rmmod nvme_tcp 00:13:14.741 rmmod nvme_fabrics 00:13:15.001 rmmod nvme_keyring 00:13:15.001 21:18:29 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:15.001 21:18:29 -- nvmf/common.sh@124 -- # set -e 00:13:15.001 21:18:29 -- nvmf/common.sh@125 -- # return 0 00:13:15.001 21:18:29 -- nvmf/common.sh@478 
-- # '[' -n 1120483 ']' 00:13:15.001 21:18:29 -- nvmf/common.sh@479 -- # killprocess 1120483 00:13:15.001 21:18:29 -- common/autotest_common.sh@936 -- # '[' -z 1120483 ']' 00:13:15.001 21:18:29 -- common/autotest_common.sh@940 -- # kill -0 1120483 00:13:15.001 21:18:29 -- common/autotest_common.sh@941 -- # uname 00:13:15.001 21:18:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:15.001 21:18:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1120483 00:13:15.001 21:18:29 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:15.001 21:18:29 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:15.001 21:18:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1120483' 00:13:15.001 killing process with pid 1120483 00:13:15.001 21:18:29 -- common/autotest_common.sh@955 -- # kill 1120483 00:13:15.001 21:18:29 -- common/autotest_common.sh@960 -- # wait 1120483 00:13:15.571 21:18:30 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:13:15.571 21:18:30 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:13:15.571 21:18:30 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:13:15.571 21:18:30 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:15.571 21:18:30 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:15.571 21:18:30 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:15.571 21:18:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:15.571 21:18:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:17.484 21:18:32 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:17.484 00:13:17.484 real 0m10.247s 00:13:17.484 user 0m8.847s 00:13:17.484 sys 0m5.035s 00:13:17.484 21:18:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:17.484 21:18:32 -- common/autotest_common.sh@10 -- # set +x 00:13:17.484 ************************************ 00:13:17.484 END TEST nvmf_multitarget 00:13:17.484 ************************************ 00:13:17.484 21:18:32 -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:13:17.484 21:18:32 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:17.484 21:18:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:17.484 21:18:32 -- common/autotest_common.sh@10 -- # set +x 00:13:17.745 ************************************ 00:13:17.745 START TEST nvmf_rpc 00:13:17.745 ************************************ 00:13:17.745 21:18:32 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:13:17.745 * Looking for test storage... 
00:13:17.745 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:13:17.745 21:18:32 -- target/rpc.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:13:17.745 21:18:32 -- nvmf/common.sh@7 -- # uname -s 00:13:17.745 21:18:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:17.745 21:18:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:17.745 21:18:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:17.745 21:18:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:17.745 21:18:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:17.745 21:18:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:17.745 21:18:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:17.745 21:18:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:17.745 21:18:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:17.745 21:18:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:17.745 21:18:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b7babf-2e5c-ee11-906e-a4bf01970bf2 00:13:17.745 21:18:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=80b7babf-2e5c-ee11-906e-a4bf01970bf2 00:13:17.745 21:18:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:17.745 21:18:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:17.745 21:18:32 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:13:17.745 21:18:32 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:17.745 21:18:32 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:13:17.745 21:18:32 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:17.745 21:18:32 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:17.745 21:18:32 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:17.745 21:18:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.745 21:18:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.745 21:18:32 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.745 21:18:32 -- paths/export.sh@5 -- # export PATH 00:13:17.745 21:18:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.745 21:18:32 -- nvmf/common.sh@47 -- # : 0 00:13:17.745 21:18:32 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:17.745 21:18:32 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:17.745 21:18:32 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:17.745 21:18:32 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:17.745 21:18:32 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:17.745 21:18:32 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:17.745 21:18:32 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:17.745 21:18:32 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:17.745 21:18:32 -- target/rpc.sh@11 -- # loops=5 00:13:17.745 21:18:32 -- target/rpc.sh@23 -- # nvmftestinit 00:13:17.745 21:18:32 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:13:17.745 21:18:32 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:17.745 21:18:32 -- nvmf/common.sh@437 -- # prepare_net_devs 00:13:17.745 21:18:32 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:13:17.745 21:18:32 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:13:17.745 21:18:32 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:17.745 21:18:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:17.745 21:18:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:17.745 21:18:32 -- nvmf/common.sh@403 -- # [[ phy-fallback != virt ]] 00:13:17.745 21:18:32 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:13:17.745 21:18:32 -- nvmf/common.sh@285 -- # xtrace_disable 00:13:17.745 21:18:32 -- common/autotest_common.sh@10 -- # set +x 00:13:23.043 21:18:37 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:23.043 21:18:37 -- nvmf/common.sh@291 -- # pci_devs=() 00:13:23.043 21:18:37 -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:23.043 21:18:37 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:23.043 21:18:37 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:23.043 21:18:37 -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:23.043 21:18:37 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:23.043 21:18:37 -- nvmf/common.sh@295 -- # net_devs=() 00:13:23.043 21:18:37 -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:23.043 21:18:37 -- nvmf/common.sh@296 -- # e810=() 00:13:23.043 21:18:37 -- nvmf/common.sh@296 -- # local -ga e810 
00:13:23.043 21:18:37 -- nvmf/common.sh@297 -- # x722=() 00:13:23.043 21:18:37 -- nvmf/common.sh@297 -- # local -ga x722 00:13:23.043 21:18:37 -- nvmf/common.sh@298 -- # mlx=() 00:13:23.043 21:18:37 -- nvmf/common.sh@298 -- # local -ga mlx 00:13:23.043 21:18:37 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:23.043 21:18:37 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:23.043 21:18:37 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:23.043 21:18:37 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:23.043 21:18:37 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:23.043 21:18:37 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:23.043 21:18:37 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:23.043 21:18:37 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:23.043 21:18:37 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:23.043 21:18:37 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:23.043 21:18:37 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:23.043 21:18:37 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:23.043 21:18:37 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:23.043 21:18:37 -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:13:23.043 21:18:37 -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:13:23.043 21:18:37 -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:13:23.043 21:18:37 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:23.043 21:18:37 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:23.043 21:18:37 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:13:23.043 Found 0000:27:00.0 (0x8086 - 0x159b) 00:13:23.043 21:18:37 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:23.043 21:18:37 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:23.043 21:18:37 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:23.043 21:18:37 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:23.043 21:18:37 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:23.043 21:18:37 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:23.043 21:18:37 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:13:23.043 Found 0000:27:00.1 (0x8086 - 0x159b) 00:13:23.043 21:18:37 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:23.043 21:18:37 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:23.043 21:18:37 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:23.043 21:18:37 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:23.043 21:18:37 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:23.043 21:18:37 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:23.043 21:18:37 -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:13:23.043 21:18:37 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:23.043 21:18:37 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:23.043 21:18:37 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:23.043 21:18:37 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:23.043 21:18:37 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:13:23.043 Found net devices under 0000:27:00.0: cvl_0_0 00:13:23.043 21:18:37 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:23.043 21:18:37 -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:23.043 21:18:37 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:23.043 21:18:37 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:23.043 21:18:37 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:23.043 21:18:37 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:13:23.043 Found net devices under 0000:27:00.1: cvl_0_1 00:13:23.043 21:18:37 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:23.043 21:18:37 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:13:23.043 21:18:37 -- nvmf/common.sh@403 -- # is_hw=yes 00:13:23.043 21:18:37 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:13:23.043 21:18:37 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:13:23.043 21:18:37 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:13:23.043 21:18:37 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:23.043 21:18:37 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:23.043 21:18:37 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:23.043 21:18:37 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:23.043 21:18:37 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:23.043 21:18:37 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:23.043 21:18:37 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:23.043 21:18:37 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:23.043 21:18:37 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:23.043 21:18:37 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:23.043 21:18:37 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:23.043 21:18:37 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:23.044 21:18:37 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:23.044 21:18:37 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:23.044 21:18:37 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:23.044 21:18:37 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:23.044 21:18:37 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:23.044 21:18:37 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:23.044 21:18:37 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:23.044 21:18:37 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:23.044 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:23.044 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.602 ms 00:13:23.044 00:13:23.044 --- 10.0.0.2 ping statistics --- 00:13:23.044 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:23.044 rtt min/avg/max/mdev = 0.602/0.602/0.602/0.000 ms 00:13:23.044 21:18:37 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:23.044 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:23.044 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.285 ms 00:13:23.044 00:13:23.044 --- 10.0.0.1 ping statistics --- 00:13:23.044 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:23.044 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:13:23.044 21:18:37 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:23.044 21:18:37 -- nvmf/common.sh@411 -- # return 0 00:13:23.044 21:18:37 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:13:23.044 21:18:37 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:23.044 21:18:37 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:13:23.044 21:18:37 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:13:23.044 21:18:37 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:23.044 21:18:37 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:13:23.044 21:18:37 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:13:23.305 21:18:38 -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:13:23.305 21:18:38 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:13:23.305 21:18:38 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:23.305 21:18:38 -- common/autotest_common.sh@10 -- # set +x 00:13:23.305 21:18:38 -- nvmf/common.sh@470 -- # nvmfpid=1124795 00:13:23.305 21:18:38 -- nvmf/common.sh@471 -- # waitforlisten 1124795 00:13:23.305 21:18:38 -- common/autotest_common.sh@817 -- # '[' -z 1124795 ']' 00:13:23.305 21:18:38 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:23.305 21:18:38 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:23.305 21:18:38 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:23.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:23.305 21:18:38 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:23.305 21:18:38 -- common/autotest_common.sh@10 -- # set +x 00:13:23.305 21:18:38 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:23.305 [2024-04-24 21:18:38.080724] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization... 00:13:23.305 [2024-04-24 21:18:38.080797] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:23.305 EAL: No free 2048 kB hugepages reported on node 1 00:13:23.305 [2024-04-24 21:18:38.174302] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:23.305 [2024-04-24 21:18:38.267888] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:23.305 [2024-04-24 21:18:38.267925] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:23.305 [2024-04-24 21:18:38.267937] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:23.305 [2024-04-24 21:18:38.267946] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:23.305 [2024-04-24 21:18:38.267956] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:23.305 [2024-04-24 21:18:38.268036] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:23.305 [2024-04-24 21:18:38.268063] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:23.305 [2024-04-24 21:18:38.268085] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:23.305 [2024-04-24 21:18:38.268098] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:23.875 21:18:38 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:23.875 21:18:38 -- common/autotest_common.sh@850 -- # return 0 00:13:23.875 21:18:38 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:13:23.875 21:18:38 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:23.875 21:18:38 -- common/autotest_common.sh@10 -- # set +x 00:13:24.136 21:18:38 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:24.136 21:18:38 -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:13:24.136 21:18:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:24.136 21:18:38 -- common/autotest_common.sh@10 -- # set +x 00:13:24.136 21:18:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:24.136 21:18:38 -- target/rpc.sh@26 -- # stats='{ 00:13:24.136 "tick_rate": 1900000000, 00:13:24.136 "poll_groups": [ 00:13:24.136 { 00:13:24.136 "name": "nvmf_tgt_poll_group_0", 00:13:24.136 "admin_qpairs": 0, 00:13:24.136 "io_qpairs": 0, 00:13:24.136 "current_admin_qpairs": 0, 00:13:24.136 "current_io_qpairs": 0, 00:13:24.136 "pending_bdev_io": 0, 00:13:24.136 "completed_nvme_io": 0, 00:13:24.136 "transports": [] 00:13:24.136 }, 00:13:24.136 { 00:13:24.136 "name": "nvmf_tgt_poll_group_1", 00:13:24.136 "admin_qpairs": 0, 00:13:24.136 "io_qpairs": 0, 00:13:24.136 "current_admin_qpairs": 0, 00:13:24.136 "current_io_qpairs": 0, 00:13:24.136 "pending_bdev_io": 0, 00:13:24.136 "completed_nvme_io": 0, 00:13:24.136 "transports": [] 00:13:24.136 }, 00:13:24.136 { 00:13:24.136 "name": "nvmf_tgt_poll_group_2", 00:13:24.136 "admin_qpairs": 0, 00:13:24.136 "io_qpairs": 0, 00:13:24.136 "current_admin_qpairs": 0, 00:13:24.136 "current_io_qpairs": 0, 00:13:24.136 "pending_bdev_io": 0, 00:13:24.136 "completed_nvme_io": 0, 00:13:24.137 "transports": [] 00:13:24.137 }, 00:13:24.137 { 00:13:24.137 "name": "nvmf_tgt_poll_group_3", 00:13:24.137 "admin_qpairs": 0, 00:13:24.137 "io_qpairs": 0, 00:13:24.137 "current_admin_qpairs": 0, 00:13:24.137 "current_io_qpairs": 0, 00:13:24.137 "pending_bdev_io": 0, 00:13:24.137 "completed_nvme_io": 0, 00:13:24.137 "transports": [] 00:13:24.137 } 00:13:24.137 ] 00:13:24.137 }' 00:13:24.137 21:18:38 -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:13:24.137 21:18:38 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:13:24.137 21:18:38 -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:13:24.137 21:18:38 -- target/rpc.sh@15 -- # wc -l 00:13:24.137 21:18:38 -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:13:24.137 21:18:38 -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:13:24.137 21:18:38 -- target/rpc.sh@29 -- # [[ null == null ]] 00:13:24.137 21:18:38 -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:24.137 21:18:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:24.137 21:18:38 -- common/autotest_common.sh@10 -- # set +x 00:13:24.137 [2024-04-24 21:18:38.952227] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:24.137 21:18:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:24.137 21:18:38 -- 
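[editor's note] Everything nvmf_tcp_init traced above reduces to a short, self-contained setup. The sketch below is illustrative rather than a verbatim excerpt: it condenses the traced commands (nvmf/common.sh@229-268) plus the target launch, assuming the two ice ports were enumerated as cvl_0_0 (target side) and cvl_0_1 (initiator side) as in this run:

    # Put the target's port in its own network namespace so one host can play both roles.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator IP, root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP, inside ns
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # admit NVMe/TCP traffic
    ping -c 1 10.0.0.2                                                 # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target ns -> root ns
    modprobe nvme-tcp                                                  # initiator-side driver
    # The target app itself then runs inside the namespace (per nvmf/common.sh@469):
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &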
target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:13:24.137 21:18:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:24.137 21:18:38 -- common/autotest_common.sh@10 -- # set +x 00:13:24.137 21:18:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:24.137 21:18:38 -- target/rpc.sh@33 -- # stats='{ 00:13:24.137 "tick_rate": 1900000000, 00:13:24.137 "poll_groups": [ 00:13:24.137 { 00:13:24.137 "name": "nvmf_tgt_poll_group_0", 00:13:24.137 "admin_qpairs": 0, 00:13:24.137 "io_qpairs": 0, 00:13:24.137 "current_admin_qpairs": 0, 00:13:24.137 "current_io_qpairs": 0, 00:13:24.137 "pending_bdev_io": 0, 00:13:24.137 "completed_nvme_io": 0, 00:13:24.137 "transports": [ 00:13:24.137 { 00:13:24.137 "trtype": "TCP" 00:13:24.137 } 00:13:24.137 ] 00:13:24.137 }, 00:13:24.137 { 00:13:24.137 "name": "nvmf_tgt_poll_group_1", 00:13:24.137 "admin_qpairs": 0, 00:13:24.137 "io_qpairs": 0, 00:13:24.137 "current_admin_qpairs": 0, 00:13:24.137 "current_io_qpairs": 0, 00:13:24.137 "pending_bdev_io": 0, 00:13:24.137 "completed_nvme_io": 0, 00:13:24.137 "transports": [ 00:13:24.137 { 00:13:24.137 "trtype": "TCP" 00:13:24.137 } 00:13:24.137 ] 00:13:24.137 }, 00:13:24.137 { 00:13:24.137 "name": "nvmf_tgt_poll_group_2", 00:13:24.137 "admin_qpairs": 0, 00:13:24.137 "io_qpairs": 0, 00:13:24.137 "current_admin_qpairs": 0, 00:13:24.137 "current_io_qpairs": 0, 00:13:24.137 "pending_bdev_io": 0, 00:13:24.137 "completed_nvme_io": 0, 00:13:24.137 "transports": [ 00:13:24.137 { 00:13:24.137 "trtype": "TCP" 00:13:24.137 } 00:13:24.137 ] 00:13:24.137 }, 00:13:24.137 { 00:13:24.137 "name": "nvmf_tgt_poll_group_3", 00:13:24.137 "admin_qpairs": 0, 00:13:24.137 "io_qpairs": 0, 00:13:24.137 "current_admin_qpairs": 0, 00:13:24.137 "current_io_qpairs": 0, 00:13:24.137 "pending_bdev_io": 0, 00:13:24.137 "completed_nvme_io": 0, 00:13:24.137 "transports": [ 00:13:24.137 { 00:13:24.137 "trtype": "TCP" 00:13:24.137 } 00:13:24.137 ] 00:13:24.137 } 00:13:24.137 ] 00:13:24.137 }' 00:13:24.137 21:18:38 -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:13:24.137 21:18:38 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:24.137 21:18:38 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:24.137 21:18:38 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:24.137 21:18:39 -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:13:24.137 21:18:39 -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:13:24.137 21:18:39 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:24.137 21:18:39 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:24.137 21:18:39 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:24.137 21:18:39 -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:13:24.137 21:18:39 -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:13:24.137 21:18:39 -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:13:24.137 21:18:39 -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:13:24.137 21:18:39 -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:24.137 21:18:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:24.137 21:18:39 -- common/autotest_common.sh@10 -- # set +x 00:13:24.137 Malloc1 00:13:24.137 21:18:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:24.137 21:18:39 -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:24.137 21:18:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:24.137 21:18:39 -- common/autotest_common.sh@10 -- # set +x 00:13:24.397 
21:18:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:24.397 21:18:39 -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:24.397 21:18:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:24.397 21:18:39 -- common/autotest_common.sh@10 -- # set +x 00:13:24.397 21:18:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:24.397 21:18:39 -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:13:24.397 21:18:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:24.397 21:18:39 -- common/autotest_common.sh@10 -- # set +x 00:13:24.397 21:18:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:24.397 21:18:39 -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:24.398 21:18:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:24.398 21:18:39 -- common/autotest_common.sh@10 -- # set +x 00:13:24.398 [2024-04-24 21:18:39.121858] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:24.398 21:18:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:24.398 21:18:39 -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b7babf-2e5c-ee11-906e-a4bf01970bf2 --hostid=80b7babf-2e5c-ee11-906e-a4bf01970bf2 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80b7babf-2e5c-ee11-906e-a4bf01970bf2 -a 10.0.0.2 -s 4420 00:13:24.398 21:18:39 -- common/autotest_common.sh@638 -- # local es=0 00:13:24.398 21:18:39 -- common/autotest_common.sh@640 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b7babf-2e5c-ee11-906e-a4bf01970bf2 --hostid=80b7babf-2e5c-ee11-906e-a4bf01970bf2 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80b7babf-2e5c-ee11-906e-a4bf01970bf2 -a 10.0.0.2 -s 4420 00:13:24.398 21:18:39 -- common/autotest_common.sh@626 -- # local arg=nvme 00:13:24.398 21:18:39 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:24.398 21:18:39 -- common/autotest_common.sh@630 -- # type -t nvme 00:13:24.398 21:18:39 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:24.398 21:18:39 -- common/autotest_common.sh@632 -- # type -P nvme 00:13:24.398 21:18:39 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:24.398 21:18:39 -- common/autotest_common.sh@632 -- # arg=/usr/sbin/nvme 00:13:24.398 21:18:39 -- common/autotest_common.sh@632 -- # [[ -x /usr/sbin/nvme ]] 00:13:24.398 21:18:39 -- common/autotest_common.sh@641 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b7babf-2e5c-ee11-906e-a4bf01970bf2 --hostid=80b7babf-2e5c-ee11-906e-a4bf01970bf2 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80b7babf-2e5c-ee11-906e-a4bf01970bf2 -a 10.0.0.2 -s 4420 00:13:24.398 [2024-04-24 21:18:39.151006] ctrlr.c: 766:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80b7babf-2e5c-ee11-906e-a4bf01970bf2' 00:13:24.398 Failed to write to /dev/nvme-fabrics: Input/output error 00:13:24.398 could not add new controller: failed to write to nvme-fabrics device 00:13:24.398 21:18:39 -- common/autotest_common.sh@641 -- # es=1 00:13:24.398 21:18:39 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:13:24.398 21:18:39 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:13:24.398 21:18:39 -- common/autotest_common.sh@665 -- # 
(( !es == 0 )) 00:13:24.398 21:18:39 -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80b7babf-2e5c-ee11-906e-a4bf01970bf2 00:13:24.398 21:18:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:24.398 21:18:39 -- common/autotest_common.sh@10 -- # set +x 00:13:24.398 21:18:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:24.398 21:18:39 -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b7babf-2e5c-ee11-906e-a4bf01970bf2 --hostid=80b7babf-2e5c-ee11-906e-a4bf01970bf2 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:25.778 21:18:40 -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:13:25.778 21:18:40 -- common/autotest_common.sh@1184 -- # local i=0 00:13:25.778 21:18:40 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:13:25.778 21:18:40 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:13:25.778 21:18:40 -- common/autotest_common.sh@1191 -- # sleep 2 00:13:28.395 21:18:42 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:13:28.395 21:18:42 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:13:28.395 21:18:42 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:13:28.395 21:18:42 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:13:28.395 21:18:42 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:13:28.395 21:18:42 -- common/autotest_common.sh@1194 -- # return 0 00:13:28.395 21:18:42 -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:28.395 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:28.395 21:18:42 -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:28.395 21:18:42 -- common/autotest_common.sh@1205 -- # local i=0 00:13:28.395 21:18:42 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:13:28.395 21:18:42 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:28.395 21:18:42 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:13:28.395 21:18:42 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:28.395 21:18:42 -- common/autotest_common.sh@1217 -- # return 0 00:13:28.395 21:18:42 -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80b7babf-2e5c-ee11-906e-a4bf01970bf2 00:13:28.395 21:18:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:28.395 21:18:42 -- common/autotest_common.sh@10 -- # set +x 00:13:28.395 21:18:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:28.395 21:18:42 -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b7babf-2e5c-ee11-906e-a4bf01970bf2 --hostid=80b7babf-2e5c-ee11-906e-a4bf01970bf2 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:28.395 21:18:42 -- common/autotest_common.sh@638 -- # local es=0 00:13:28.395 21:18:42 -- common/autotest_common.sh@640 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b7babf-2e5c-ee11-906e-a4bf01970bf2 --hostid=80b7babf-2e5c-ee11-906e-a4bf01970bf2 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:28.396 21:18:42 -- common/autotest_common.sh@626 -- # local arg=nvme 00:13:28.396 21:18:42 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:28.396 21:18:42 -- common/autotest_common.sh@630 -- # type -t nvme 00:13:28.396 21:18:42 -- 
common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:28.396 21:18:42 -- common/autotest_common.sh@632 -- # type -P nvme 00:13:28.396 21:18:42 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:28.396 21:18:42 -- common/autotest_common.sh@632 -- # arg=/usr/sbin/nvme 00:13:28.396 21:18:42 -- common/autotest_common.sh@632 -- # [[ -x /usr/sbin/nvme ]] 00:13:28.396 21:18:42 -- common/autotest_common.sh@641 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b7babf-2e5c-ee11-906e-a4bf01970bf2 --hostid=80b7babf-2e5c-ee11-906e-a4bf01970bf2 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:28.396 [2024-04-24 21:18:42.932764] ctrlr.c: 766:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80b7babf-2e5c-ee11-906e-a4bf01970bf2' 00:13:28.396 Failed to write to /dev/nvme-fabrics: Input/output error 00:13:28.396 could not add new controller: failed to write to nvme-fabrics device 00:13:28.396 21:18:42 -- common/autotest_common.sh@641 -- # es=1 00:13:28.396 21:18:42 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:13:28.396 21:18:42 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:13:28.396 21:18:42 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:13:28.396 21:18:42 -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:13:28.396 21:18:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:28.396 21:18:42 -- common/autotest_common.sh@10 -- # set +x 00:13:28.396 21:18:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:28.396 21:18:42 -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b7babf-2e5c-ee11-906e-a4bf01970bf2 --hostid=80b7babf-2e5c-ee11-906e-a4bf01970bf2 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:29.777 21:18:44 -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:13:29.777 21:18:44 -- common/autotest_common.sh@1184 -- # local i=0 00:13:29.777 21:18:44 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:13:29.777 21:18:44 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:13:29.777 21:18:44 -- common/autotest_common.sh@1191 -- # sleep 2 00:13:31.687 21:18:46 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:13:31.687 21:18:46 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:13:31.687 21:18:46 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:13:31.687 21:18:46 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:13:31.687 21:18:46 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:13:31.687 21:18:46 -- common/autotest_common.sh@1194 -- # return 0 00:13:31.687 21:18:46 -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:31.947 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:31.947 21:18:46 -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:31.947 21:18:46 -- common/autotest_common.sh@1205 -- # local i=0 00:13:31.947 21:18:46 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:13:31.947 21:18:46 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:31.947 21:18:46 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:13:31.947 21:18:46 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:31.947 21:18:46 -- common/autotest_common.sh@1217 -- # return 0 00:13:31.947 21:18:46 -- 
target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:31.947 21:18:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:31.947 21:18:46 -- common/autotest_common.sh@10 -- # set +x 00:13:31.947 21:18:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:31.947 21:18:46 -- target/rpc.sh@81 -- # seq 1 5 00:13:31.947 21:18:46 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:31.947 21:18:46 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:31.947 21:18:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:31.947 21:18:46 -- common/autotest_common.sh@10 -- # set +x 00:13:31.947 21:18:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:31.947 21:18:46 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:31.947 21:18:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:31.947 21:18:46 -- common/autotest_common.sh@10 -- # set +x 00:13:31.948 [2024-04-24 21:18:46.741304] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:31.948 21:18:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:31.948 21:18:46 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:31.948 21:18:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:31.948 21:18:46 -- common/autotest_common.sh@10 -- # set +x 00:13:31.948 21:18:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:31.948 21:18:46 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:31.948 21:18:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:31.948 21:18:46 -- common/autotest_common.sh@10 -- # set +x 00:13:31.948 21:18:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:31.948 21:18:46 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b7babf-2e5c-ee11-906e-a4bf01970bf2 --hostid=80b7babf-2e5c-ee11-906e-a4bf01970bf2 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:33.326 21:18:48 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:33.326 21:18:48 -- common/autotest_common.sh@1184 -- # local i=0 00:13:33.326 21:18:48 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:13:33.326 21:18:48 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:13:33.326 21:18:48 -- common/autotest_common.sh@1191 -- # sleep 2 00:13:35.860 21:18:50 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:13:35.860 21:18:50 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:13:35.860 21:18:50 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:13:35.860 21:18:50 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:13:35.860 21:18:50 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:13:35.860 21:18:50 -- common/autotest_common.sh@1194 -- # return 0 00:13:35.860 21:18:50 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:35.860 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:35.860 21:18:50 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:35.860 21:18:50 -- common/autotest_common.sh@1205 -- # local i=0 00:13:35.860 21:18:50 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:13:35.860 21:18:50 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 
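[editor's note] Each of the five loop iterations that start here (target/rpc.sh@81-94) exercises the same lifecycle; rpc_cmd in the trace is a thin wrapper that hands these verbs to scripts/rpc.py against the running nvmf_tgt. A hedged sketch of one pass, with <host-uuid> standing in for the nqn.2014-08.org.nvmexpress:uuid value used throughout this log:

    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5   # attach bdev as nsid 5
    scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 --hostnqn=<host-uuid>
    # waitforserial then polls `lsblk -l -o NAME,SERIAL` until SPDKISFASTANDAWESOME appears
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1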
00:13:35.860 21:18:50 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:35.860 21:18:50 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:13:35.860 21:18:50 -- common/autotest_common.sh@1217 -- # return 0 00:13:35.860 21:18:50 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:35.860 21:18:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:35.860 21:18:50 -- common/autotest_common.sh@10 -- # set +x 00:13:35.860 21:18:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:35.861 21:18:50 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:35.861 21:18:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:35.861 21:18:50 -- common/autotest_common.sh@10 -- # set +x 00:13:35.861 21:18:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:35.861 21:18:50 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:35.861 21:18:50 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:35.861 21:18:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:35.861 21:18:50 -- common/autotest_common.sh@10 -- # set +x 00:13:35.861 21:18:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:35.861 21:18:50 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:35.861 21:18:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:35.861 21:18:50 -- common/autotest_common.sh@10 -- # set +x 00:13:35.861 [2024-04-24 21:18:50.484994] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:35.861 21:18:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:35.861 21:18:50 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:35.861 21:18:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:35.861 21:18:50 -- common/autotest_common.sh@10 -- # set +x 00:13:35.861 21:18:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:35.861 21:18:50 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:35.861 21:18:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:35.861 21:18:50 -- common/autotest_common.sh@10 -- # set +x 00:13:35.861 21:18:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:35.861 21:18:50 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b7babf-2e5c-ee11-906e-a4bf01970bf2 --hostid=80b7babf-2e5c-ee11-906e-a4bf01970bf2 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:37.240 21:18:51 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:37.240 21:18:51 -- common/autotest_common.sh@1184 -- # local i=0 00:13:37.240 21:18:51 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:13:37.240 21:18:51 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:13:37.240 21:18:51 -- common/autotest_common.sh@1191 -- # sleep 2 00:13:39.147 21:18:53 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:13:39.147 21:18:53 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:13:39.147 21:18:53 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:13:39.147 21:18:53 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:13:39.147 21:18:53 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:13:39.147 21:18:53 -- 
common/autotest_common.sh@1194 -- # return 0 00:13:39.147 21:18:53 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:39.408 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:39.408 21:18:54 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:39.408 21:18:54 -- common/autotest_common.sh@1205 -- # local i=0 00:13:39.408 21:18:54 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:13:39.408 21:18:54 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:39.408 21:18:54 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:13:39.408 21:18:54 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:39.408 21:18:54 -- common/autotest_common.sh@1217 -- # return 0 00:13:39.408 21:18:54 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:39.408 21:18:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:39.408 21:18:54 -- common/autotest_common.sh@10 -- # set +x 00:13:39.408 21:18:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:39.408 21:18:54 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:39.408 21:18:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:39.408 21:18:54 -- common/autotest_common.sh@10 -- # set +x 00:13:39.408 21:18:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:39.408 21:18:54 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:39.408 21:18:54 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:39.408 21:18:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:39.408 21:18:54 -- common/autotest_common.sh@10 -- # set +x 00:13:39.408 21:18:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:39.408 21:18:54 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:39.408 21:18:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:39.408 21:18:54 -- common/autotest_common.sh@10 -- # set +x 00:13:39.408 [2024-04-24 21:18:54.197309] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:39.408 21:18:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:39.408 21:18:54 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:39.408 21:18:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:39.408 21:18:54 -- common/autotest_common.sh@10 -- # set +x 00:13:39.408 21:18:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:39.408 21:18:54 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:39.408 21:18:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:39.408 21:18:54 -- common/autotest_common.sh@10 -- # set +x 00:13:39.408 21:18:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:39.408 21:18:54 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b7babf-2e5c-ee11-906e-a4bf01970bf2 --hostid=80b7babf-2e5c-ee11-906e-a4bf01970bf2 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:40.787 21:18:55 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:40.787 21:18:55 -- common/autotest_common.sh@1184 -- # local i=0 00:13:40.787 21:18:55 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:13:40.787 21:18:55 -- common/autotest_common.sh@1186 -- 
# [[ -n '' ]] 00:13:40.787 21:18:55 -- common/autotest_common.sh@1191 -- # sleep 2 00:13:43.327 21:18:57 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:13:43.327 21:18:57 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:13:43.327 21:18:57 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:13:43.327 21:18:57 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:13:43.327 21:18:57 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:13:43.327 21:18:57 -- common/autotest_common.sh@1194 -- # return 0 00:13:43.327 21:18:57 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:43.327 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:43.327 21:18:57 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:43.327 21:18:57 -- common/autotest_common.sh@1205 -- # local i=0 00:13:43.327 21:18:57 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:13:43.327 21:18:57 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:43.327 21:18:57 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:13:43.327 21:18:57 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:43.327 21:18:57 -- common/autotest_common.sh@1217 -- # return 0 00:13:43.327 21:18:57 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:43.327 21:18:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:43.327 21:18:57 -- common/autotest_common.sh@10 -- # set +x 00:13:43.327 21:18:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:43.327 21:18:57 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:43.327 21:18:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:43.327 21:18:57 -- common/autotest_common.sh@10 -- # set +x 00:13:43.327 21:18:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:43.327 21:18:57 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:43.327 21:18:57 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:43.327 21:18:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:43.327 21:18:57 -- common/autotest_common.sh@10 -- # set +x 00:13:43.327 21:18:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:43.327 21:18:57 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:43.327 21:18:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:43.327 21:18:57 -- common/autotest_common.sh@10 -- # set +x 00:13:43.327 [2024-04-24 21:18:57.945738] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:43.327 21:18:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:43.327 21:18:57 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:43.327 21:18:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:43.327 21:18:57 -- common/autotest_common.sh@10 -- # set +x 00:13:43.327 21:18:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:43.327 21:18:57 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:43.327 21:18:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:43.327 21:18:57 -- common/autotest_common.sh@10 -- # set +x 00:13:43.327 21:18:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:43.327 
21:18:57 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b7babf-2e5c-ee11-906e-a4bf01970bf2 --hostid=80b7babf-2e5c-ee11-906e-a4bf01970bf2 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:44.705 21:18:59 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:44.705 21:18:59 -- common/autotest_common.sh@1184 -- # local i=0 00:13:44.705 21:18:59 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:13:44.705 21:18:59 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:13:44.705 21:18:59 -- common/autotest_common.sh@1191 -- # sleep 2 00:13:46.612 21:19:01 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:13:46.612 21:19:01 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:13:46.612 21:19:01 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:13:46.612 21:19:01 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:13:46.612 21:19:01 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:13:46.612 21:19:01 -- common/autotest_common.sh@1194 -- # return 0 00:13:46.612 21:19:01 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:46.873 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:46.873 21:19:01 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:46.873 21:19:01 -- common/autotest_common.sh@1205 -- # local i=0 00:13:46.873 21:19:01 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:13:46.873 21:19:01 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:46.873 21:19:01 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:13:46.873 21:19:01 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:46.873 21:19:01 -- common/autotest_common.sh@1217 -- # return 0 00:13:46.873 21:19:01 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:46.873 21:19:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:46.873 21:19:01 -- common/autotest_common.sh@10 -- # set +x 00:13:46.873 21:19:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:46.873 21:19:01 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:46.873 21:19:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:46.873 21:19:01 -- common/autotest_common.sh@10 -- # set +x 00:13:46.873 21:19:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:46.873 21:19:01 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:46.873 21:19:01 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:46.873 21:19:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:46.873 21:19:01 -- common/autotest_common.sh@10 -- # set +x 00:13:46.873 21:19:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:46.873 21:19:01 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:46.873 21:19:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:46.873 21:19:01 -- common/autotest_common.sh@10 -- # set +x 00:13:46.873 [2024-04-24 21:19:01.677752] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:46.873 21:19:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:46.873 21:19:01 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:46.873 
21:19:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:46.873 21:19:01 -- common/autotest_common.sh@10 -- # set +x 00:13:46.873 21:19:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:46.873 21:19:01 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:46.873 21:19:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:46.873 21:19:01 -- common/autotest_common.sh@10 -- # set +x 00:13:46.873 21:19:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:46.873 21:19:01 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b7babf-2e5c-ee11-906e-a4bf01970bf2 --hostid=80b7babf-2e5c-ee11-906e-a4bf01970bf2 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:48.262 21:19:03 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:48.262 21:19:03 -- common/autotest_common.sh@1184 -- # local i=0 00:13:48.262 21:19:03 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:13:48.262 21:19:03 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:13:48.262 21:19:03 -- common/autotest_common.sh@1191 -- # sleep 2 00:13:50.177 21:19:05 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:13:50.439 21:19:05 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:13:50.439 21:19:05 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:13:50.439 21:19:05 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:13:50.439 21:19:05 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:13:50.439 21:19:05 -- common/autotest_common.sh@1194 -- # return 0 00:13:50.439 21:19:05 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:50.439 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:50.439 21:19:05 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:50.439 21:19:05 -- common/autotest_common.sh@1205 -- # local i=0 00:13:50.439 21:19:05 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:13:50.439 21:19:05 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:50.439 21:19:05 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:13:50.439 21:19:05 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:50.439 21:19:05 -- common/autotest_common.sh@1217 -- # return 0 00:13:50.439 21:19:05 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:50.439 21:19:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:50.439 21:19:05 -- common/autotest_common.sh@10 -- # set +x 00:13:50.439 21:19:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:50.439 21:19:05 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:50.439 21:19:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:50.439 21:19:05 -- common/autotest_common.sh@10 -- # set +x 00:13:50.439 21:19:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:50.439 21:19:05 -- target/rpc.sh@99 -- # seq 1 5 00:13:50.439 21:19:05 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:50.439 21:19:05 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:50.439 21:19:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:50.439 21:19:05 -- common/autotest_common.sh@10 -- # set +x 00:13:50.699 21:19:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:50.699 21:19:05 
-- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:50.699 21:19:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:50.699 21:19:05 -- common/autotest_common.sh@10 -- # set +x 00:13:50.699 [2024-04-24 21:19:05.412608] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:50.699 21:19:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:50.699 21:19:05 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:50.699 21:19:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:50.699 21:19:05 -- common/autotest_common.sh@10 -- # set +x 00:13:50.699 21:19:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:50.699 21:19:05 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:50.699 21:19:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:50.700 21:19:05 -- common/autotest_common.sh@10 -- # set +x 00:13:50.700 21:19:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:50.700 21:19:05 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:50.700 21:19:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:50.700 21:19:05 -- common/autotest_common.sh@10 -- # set +x 00:13:50.700 21:19:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:50.700 21:19:05 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:50.700 21:19:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:50.700 21:19:05 -- common/autotest_common.sh@10 -- # set +x 00:13:50.700 21:19:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:50.700 21:19:05 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:50.700 21:19:05 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:50.700 21:19:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:50.700 21:19:05 -- common/autotest_common.sh@10 -- # set +x 00:13:50.700 21:19:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:50.700 21:19:05 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:50.700 21:19:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:50.700 21:19:05 -- common/autotest_common.sh@10 -- # set +x 00:13:50.700 [2024-04-24 21:19:05.460592] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:50.700 21:19:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:50.700 21:19:05 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:50.700 21:19:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:50.700 21:19:05 -- common/autotest_common.sh@10 -- # set +x 00:13:50.700 21:19:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:50.700 21:19:05 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:50.700 21:19:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:50.700 21:19:05 -- common/autotest_common.sh@10 -- # set +x 00:13:50.700 21:19:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:50.700 21:19:05 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:50.700 21:19:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:50.700 21:19:05 -- 
common/autotest_common.sh@10 -- # set +x 00:13:50.700 21:19:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:50.700 21:19:05 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:50.700 21:19:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:50.700 21:19:05 -- common/autotest_common.sh@10 -- # set +x 00:13:50.700 21:19:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:50.700 21:19:05 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:50.700 21:19:05 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:50.700 21:19:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:50.700 21:19:05 -- common/autotest_common.sh@10 -- # set +x 00:13:50.700 21:19:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:50.700 21:19:05 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:50.700 21:19:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:50.700 21:19:05 -- common/autotest_common.sh@10 -- # set +x 00:13:50.700 [2024-04-24 21:19:05.508657] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:50.700 21:19:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:50.700 21:19:05 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:50.700 21:19:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:50.700 21:19:05 -- common/autotest_common.sh@10 -- # set +x 00:13:50.700 21:19:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:50.700 21:19:05 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:50.700 21:19:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:50.700 21:19:05 -- common/autotest_common.sh@10 -- # set +x 00:13:50.700 21:19:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:50.700 21:19:05 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:50.700 21:19:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:50.700 21:19:05 -- common/autotest_common.sh@10 -- # set +x 00:13:50.700 21:19:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:50.700 21:19:05 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:50.700 21:19:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:50.700 21:19:05 -- common/autotest_common.sh@10 -- # set +x 00:13:50.700 21:19:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:50.700 21:19:05 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:50.700 21:19:05 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:50.700 21:19:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:50.700 21:19:05 -- common/autotest_common.sh@10 -- # set +x 00:13:50.700 21:19:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:50.700 21:19:05 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:50.700 21:19:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:50.700 21:19:05 -- common/autotest_common.sh@10 -- # set +x 00:13:50.700 [2024-04-24 21:19:05.556690] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:50.700 21:19:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:50.700 
21:19:05 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:50.700 21:19:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:50.700 21:19:05 -- common/autotest_common.sh@10 -- # set +x 00:13:50.700 21:19:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:50.700 21:19:05 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:50.700 21:19:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:50.700 21:19:05 -- common/autotest_common.sh@10 -- # set +x 00:13:50.700 21:19:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:50.700 21:19:05 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:50.700 21:19:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:50.700 21:19:05 -- common/autotest_common.sh@10 -- # set +x 00:13:50.700 21:19:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:50.700 21:19:05 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:50.700 21:19:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:50.700 21:19:05 -- common/autotest_common.sh@10 -- # set +x 00:13:50.700 21:19:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:50.700 21:19:05 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:50.700 21:19:05 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:50.700 21:19:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:50.700 21:19:05 -- common/autotest_common.sh@10 -- # set +x 00:13:50.700 21:19:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:50.700 21:19:05 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:50.700 21:19:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:50.700 21:19:05 -- common/autotest_common.sh@10 -- # set +x 00:13:50.700 [2024-04-24 21:19:05.604755] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:50.700 21:19:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:50.700 21:19:05 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:50.700 21:19:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:50.700 21:19:05 -- common/autotest_common.sh@10 -- # set +x 00:13:50.700 21:19:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:50.700 21:19:05 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:50.701 21:19:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:50.701 21:19:05 -- common/autotest_common.sh@10 -- # set +x 00:13:50.701 21:19:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:50.701 21:19:05 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:50.701 21:19:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:50.701 21:19:05 -- common/autotest_common.sh@10 -- # set +x 00:13:50.701 21:19:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:50.701 21:19:05 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:50.701 21:19:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:50.701 21:19:05 -- common/autotest_common.sh@10 -- # set +x 00:13:50.701 21:19:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:50.701 21:19:05 -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 
00:13:50.701 21:19:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:50.701 21:19:05 -- common/autotest_common.sh@10 -- # set +x 00:13:50.701 21:19:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:50.701 21:19:05 -- target/rpc.sh@110 -- # stats='{ 00:13:50.701 "tick_rate": 1900000000, 00:13:50.701 "poll_groups": [ 00:13:50.701 { 00:13:50.701 "name": "nvmf_tgt_poll_group_0", 00:13:50.701 "admin_qpairs": 0, 00:13:50.701 "io_qpairs": 224, 00:13:50.701 "current_admin_qpairs": 0, 00:13:50.701 "current_io_qpairs": 0, 00:13:50.701 "pending_bdev_io": 0, 00:13:50.701 "completed_nvme_io": 225, 00:13:50.701 "transports": [ 00:13:50.701 { 00:13:50.701 "trtype": "TCP" 00:13:50.701 } 00:13:50.701 ] 00:13:50.701 }, 00:13:50.701 { 00:13:50.701 "name": "nvmf_tgt_poll_group_1", 00:13:50.701 "admin_qpairs": 1, 00:13:50.701 "io_qpairs": 223, 00:13:50.701 "current_admin_qpairs": 0, 00:13:50.701 "current_io_qpairs": 0, 00:13:50.701 "pending_bdev_io": 0, 00:13:50.701 "completed_nvme_io": 510, 00:13:50.701 "transports": [ 00:13:50.701 { 00:13:50.701 "trtype": "TCP" 00:13:50.701 } 00:13:50.701 ] 00:13:50.701 }, 00:13:50.701 { 00:13:50.701 "name": "nvmf_tgt_poll_group_2", 00:13:50.701 "admin_qpairs": 6, 00:13:50.701 "io_qpairs": 218, 00:13:50.701 "current_admin_qpairs": 0, 00:13:50.701 "current_io_qpairs": 0, 00:13:50.701 "pending_bdev_io": 0, 00:13:50.701 "completed_nvme_io": 219, 00:13:50.701 "transports": [ 00:13:50.701 { 00:13:50.701 "trtype": "TCP" 00:13:50.701 } 00:13:50.701 ] 00:13:50.701 }, 00:13:50.701 { 00:13:50.701 "name": "nvmf_tgt_poll_group_3", 00:13:50.701 "admin_qpairs": 0, 00:13:50.701 "io_qpairs": 224, 00:13:50.701 "current_admin_qpairs": 0, 00:13:50.701 "current_io_qpairs": 0, 00:13:50.701 "pending_bdev_io": 0, 00:13:50.701 "completed_nvme_io": 285, 00:13:50.701 "transports": [ 00:13:50.701 { 00:13:50.701 "trtype": "TCP" 00:13:50.701 } 00:13:50.701 ] 00:13:50.701 } 00:13:50.701 ] 00:13:50.701 }' 00:13:50.701 21:19:05 -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:13:50.701 21:19:05 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:50.960 21:19:05 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:50.960 21:19:05 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:50.960 21:19:05 -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:13:50.960 21:19:05 -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:13:50.960 21:19:05 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:50.960 21:19:05 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:50.960 21:19:05 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:50.960 21:19:05 -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:13:50.960 21:19:05 -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:13:50.960 21:19:05 -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:13:50.960 21:19:05 -- target/rpc.sh@123 -- # nvmftestfini 00:13:50.960 21:19:05 -- nvmf/common.sh@477 -- # nvmfcleanup 00:13:50.960 21:19:05 -- nvmf/common.sh@117 -- # sync 00:13:50.960 21:19:05 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:50.960 21:19:05 -- nvmf/common.sh@120 -- # set +e 00:13:50.960 21:19:05 -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:50.960 21:19:05 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:50.960 rmmod nvme_tcp 00:13:50.960 rmmod nvme_fabrics 00:13:50.960 rmmod nvme_keyring 00:13:50.960 21:19:05 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:50.960 21:19:05 -- nvmf/common.sh@124 -- # set -e 00:13:50.960 21:19:05 -- 
nvmf/common.sh@125 -- # return 0 00:13:50.961 21:19:05 -- nvmf/common.sh@478 -- # '[' -n 1124795 ']' 00:13:50.961 21:19:05 -- nvmf/common.sh@479 -- # killprocess 1124795 00:13:50.961 21:19:05 -- common/autotest_common.sh@936 -- # '[' -z 1124795 ']' 00:13:50.961 21:19:05 -- common/autotest_common.sh@940 -- # kill -0 1124795 00:13:50.961 21:19:05 -- common/autotest_common.sh@941 -- # uname 00:13:50.961 21:19:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:50.961 21:19:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1124795 00:13:50.961 21:19:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:50.961 21:19:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:50.961 21:19:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1124795' 00:13:50.961 killing process with pid 1124795 00:13:50.961 21:19:05 -- common/autotest_common.sh@955 -- # kill 1124795 00:13:50.961 21:19:05 -- common/autotest_common.sh@960 -- # wait 1124795 00:13:51.528 21:19:06 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:13:51.528 21:19:06 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:13:51.528 21:19:06 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:13:51.528 21:19:06 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:51.528 21:19:06 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:51.528 21:19:06 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:51.529 21:19:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:51.529 21:19:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:54.074 21:19:08 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:54.074 00:13:54.074 real 0m35.975s 00:13:54.074 user 1m53.139s 00:13:54.074 sys 0m5.770s 00:13:54.074 21:19:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:54.074 21:19:08 -- common/autotest_common.sh@10 -- # set +x 00:13:54.074 ************************************ 00:13:54.074 END TEST nvmf_rpc 00:13:54.074 ************************************ 00:13:54.074 21:19:08 -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:54.074 21:19:08 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:54.074 21:19:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:54.074 21:19:08 -- common/autotest_common.sh@10 -- # set +x 00:13:54.074 ************************************ 00:13:54.074 START TEST nvmf_invalid 00:13:54.074 ************************************ 00:13:54.074 21:19:08 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:54.074 * Looking for test storage... 
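[editor's note] The pass/fail checks on the final nvmf_get_stats dump above lean on two small jq helpers whose definitions live near the top of target/rpc.sh but never appear in this log. The following is a reconstruction from the traced jq/awk/wc stages, assuming $stats holds the JSON captured via rpc_cmd nvmf_get_stats:

    jcount() {                              # how many values does a jq filter yield?
        local filter=$1
        jq "$filter" <<<"$stats" | wc -l
    }
    jsum() {                                # sum of the values a jq filter yields
        local filter=$1
        jq "$filter" <<<"$stats" | awk '{s+=$1} END {print s}'
    }

    jsum '.poll_groups[].admin_qpairs'      # 0+1+6+0 = 7 in the run above
    jsum '.poll_groups[].io_qpairs'         # 224+223+218+224 = 889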
00:13:54.074 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:13:54.074 21:19:08 -- target/invalid.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:13:54.074 21:19:08 -- nvmf/common.sh@7 -- # uname -s 00:13:54.074 21:19:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:54.074 21:19:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:54.074 21:19:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:54.074 21:19:08 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:54.074 21:19:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:54.074 21:19:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:54.074 21:19:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:54.074 21:19:08 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:54.074 21:19:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:54.074 21:19:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:54.074 21:19:08 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b7babf-2e5c-ee11-906e-a4bf01970bf2 00:13:54.074 21:19:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=80b7babf-2e5c-ee11-906e-a4bf01970bf2 00:13:54.074 21:19:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:54.074 21:19:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:54.074 21:19:08 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:13:54.074 21:19:08 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:54.074 21:19:08 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:13:54.074 21:19:08 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:54.074 21:19:08 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:54.074 21:19:08 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:54.074 21:19:08 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:54.075 21:19:08 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:54.075 21:19:08 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:54.075 21:19:08 -- paths/export.sh@5 -- # export PATH 00:13:54.075 21:19:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:54.075 21:19:08 -- nvmf/common.sh@47 -- # : 0 00:13:54.075 21:19:08 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:54.075 21:19:08 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:54.075 21:19:08 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:54.075 21:19:08 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:54.075 21:19:08 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:54.075 21:19:08 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:54.075 21:19:08 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:54.075 21:19:08 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:54.075 21:19:08 -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:54.075 21:19:08 -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:13:54.075 21:19:08 -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:13:54.075 21:19:08 -- target/invalid.sh@14 -- # target=foobar 00:13:54.075 21:19:08 -- target/invalid.sh@16 -- # RANDOM=0 00:13:54.075 21:19:08 -- target/invalid.sh@34 -- # nvmftestinit 00:13:54.075 21:19:08 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:13:54.075 21:19:08 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:54.075 21:19:08 -- nvmf/common.sh@437 -- # prepare_net_devs 00:13:54.075 21:19:08 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:13:54.075 21:19:08 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:13:54.075 21:19:08 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:54.075 21:19:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:54.075 21:19:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:54.075 21:19:08 -- nvmf/common.sh@403 -- # [[ phy-fallback != virt ]] 00:13:54.075 21:19:08 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:13:54.075 21:19:08 -- nvmf/common.sh@285 -- # xtrace_disable 00:13:54.075 21:19:08 -- common/autotest_common.sh@10 -- # set +x 00:13:59.357 21:19:13 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:59.357 21:19:13 -- nvmf/common.sh@291 -- # pci_devs=() 00:13:59.357 21:19:13 -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:59.357 21:19:13 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:59.357 21:19:13 -- 
nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:59.357 21:19:13 -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:59.357 21:19:13 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:59.357 21:19:13 -- nvmf/common.sh@295 -- # net_devs=() 00:13:59.357 21:19:13 -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:59.357 21:19:13 -- nvmf/common.sh@296 -- # e810=() 00:13:59.357 21:19:13 -- nvmf/common.sh@296 -- # local -ga e810 00:13:59.357 21:19:13 -- nvmf/common.sh@297 -- # x722=() 00:13:59.357 21:19:13 -- nvmf/common.sh@297 -- # local -ga x722 00:13:59.357 21:19:13 -- nvmf/common.sh@298 -- # mlx=() 00:13:59.357 21:19:13 -- nvmf/common.sh@298 -- # local -ga mlx 00:13:59.357 21:19:13 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:59.357 21:19:13 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:59.357 21:19:13 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:59.357 21:19:13 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:59.357 21:19:13 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:59.357 21:19:13 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:59.357 21:19:13 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:59.357 21:19:13 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:59.357 21:19:13 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:59.357 21:19:13 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:59.357 21:19:13 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:59.357 21:19:13 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:59.357 21:19:13 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:59.357 21:19:13 -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:13:59.357 21:19:13 -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:13:59.357 21:19:13 -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:13:59.357 21:19:13 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:59.357 21:19:13 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:59.357 21:19:13 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:13:59.357 Found 0000:27:00.0 (0x8086 - 0x159b) 00:13:59.357 21:19:13 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:59.357 21:19:13 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:59.357 21:19:13 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:59.357 21:19:13 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:59.357 21:19:13 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:59.357 21:19:13 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:59.357 21:19:13 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:13:59.357 Found 0000:27:00.1 (0x8086 - 0x159b) 00:13:59.357 21:19:13 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:59.357 21:19:13 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:59.357 21:19:13 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:59.357 21:19:13 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:59.357 21:19:13 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:59.357 21:19:13 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:59.357 21:19:13 -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:13:59.357 21:19:13 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:59.357 21:19:13 -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:59.357 21:19:13 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:59.357 21:19:13 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:59.357 21:19:13 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:13:59.357 Found net devices under 0000:27:00.0: cvl_0_0 00:13:59.357 21:19:13 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:59.357 21:19:13 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:59.357 21:19:13 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:59.357 21:19:13 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:59.358 21:19:13 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:59.358 21:19:13 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:13:59.358 Found net devices under 0000:27:00.1: cvl_0_1 00:13:59.358 21:19:13 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:59.358 21:19:13 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:13:59.358 21:19:13 -- nvmf/common.sh@403 -- # is_hw=yes 00:13:59.358 21:19:13 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:13:59.358 21:19:13 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:13:59.358 21:19:13 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:13:59.358 21:19:13 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:59.358 21:19:13 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:59.358 21:19:13 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:59.358 21:19:13 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:59.358 21:19:13 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:59.358 21:19:13 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:59.358 21:19:13 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:59.358 21:19:13 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:59.358 21:19:13 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:59.358 21:19:13 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:59.358 21:19:13 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:59.358 21:19:13 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:59.358 21:19:13 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:59.358 21:19:13 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:59.358 21:19:13 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:59.358 21:19:13 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:59.358 21:19:13 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:59.358 21:19:14 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:59.358 21:19:14 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:59.358 21:19:14 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:59.358 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:59.358 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.493 ms 00:13:59.358 00:13:59.358 --- 10.0.0.2 ping statistics --- 00:13:59.358 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:59.358 rtt min/avg/max/mdev = 0.493/0.493/0.493/0.000 ms 00:13:59.358 21:19:14 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:59.358 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:59.358 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.385 ms 00:13:59.358 00:13:59.358 --- 10.0.0.1 ping statistics --- 00:13:59.358 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:59.358 rtt min/avg/max/mdev = 0.385/0.385/0.385/0.000 ms 00:13:59.358 21:19:14 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:59.358 21:19:14 -- nvmf/common.sh@411 -- # return 0 00:13:59.358 21:19:14 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:13:59.358 21:19:14 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:59.358 21:19:14 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:13:59.358 21:19:14 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:13:59.358 21:19:14 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:59.358 21:19:14 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:13:59.358 21:19:14 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:13:59.358 21:19:14 -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:13:59.358 21:19:14 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:13:59.358 21:19:14 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:59.358 21:19:14 -- common/autotest_common.sh@10 -- # set +x 00:13:59.358 21:19:14 -- nvmf/common.sh@470 -- # nvmfpid=1134264 00:13:59.358 21:19:14 -- nvmf/common.sh@471 -- # waitforlisten 1134264 00:13:59.358 21:19:14 -- common/autotest_common.sh@817 -- # '[' -z 1134264 ']' 00:13:59.358 21:19:14 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:59.358 21:19:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:59.358 21:19:14 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:59.358 21:19:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:59.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:59.358 21:19:14 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:59.358 21:19:14 -- common/autotest_common.sh@10 -- # set +x 00:13:59.358 [2024-04-24 21:19:14.185326] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization... 00:13:59.358 [2024-04-24 21:19:14.185425] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:59.358 EAL: No free 2048 kB hugepages reported on node 1 00:13:59.358 [2024-04-24 21:19:14.306281] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:59.619 [2024-04-24 21:19:14.405204] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:59.619 [2024-04-24 21:19:14.405240] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:59.619 [2024-04-24 21:19:14.405251] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:59.619 [2024-04-24 21:19:14.405260] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:59.619 [2024-04-24 21:19:14.405274] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:59.619 [2024-04-24 21:19:14.405355] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:59.619 [2024-04-24 21:19:14.405460] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:59.619 [2024-04-24 21:19:14.405558] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:59.619 [2024-04-24 21:19:14.405575] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:00.189 21:19:14 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:00.189 21:19:14 -- common/autotest_common.sh@850 -- # return 0 00:14:00.189 21:19:14 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:00.189 21:19:14 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:00.189 21:19:14 -- common/autotest_common.sh@10 -- # set +x 00:14:00.189 21:19:14 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:00.189 21:19:14 -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:14:00.189 21:19:14 -- target/invalid.sh@40 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode31112 00:14:00.189 [2024-04-24 21:19:15.074321] nvmf_rpc.c: 401:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:14:00.189 21:19:15 -- target/invalid.sh@40 -- # out='request: 00:14:00.189 { 00:14:00.189 "nqn": "nqn.2016-06.io.spdk:cnode31112", 00:14:00.189 "tgt_name": "foobar", 00:14:00.190 "method": "nvmf_create_subsystem", 00:14:00.190 "req_id": 1 00:14:00.190 } 00:14:00.190 Got JSON-RPC error response 00:14:00.190 response: 00:14:00.190 { 00:14:00.190 "code": -32603, 00:14:00.190 "message": "Unable to find target foobar" 00:14:00.190 }' 00:14:00.190 21:19:15 -- target/invalid.sh@41 -- # [[ request: 00:14:00.190 { 00:14:00.190 "nqn": "nqn.2016-06.io.spdk:cnode31112", 00:14:00.190 "tgt_name": "foobar", 00:14:00.190 "method": "nvmf_create_subsystem", 00:14:00.190 "req_id": 1 00:14:00.190 } 00:14:00.190 Got JSON-RPC error response 00:14:00.190 response: 00:14:00.190 { 00:14:00.190 "code": -32603, 00:14:00.190 "message": "Unable to find target foobar" 00:14:00.190 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:14:00.190 21:19:15 -- target/invalid.sh@45 -- # echo -e '\x1f' 00:14:00.190 21:19:15 -- target/invalid.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode2968 00:14:00.450 [2024-04-24 21:19:15.242577] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2968: invalid serial number 'SPDKISFASTANDAWESOME' 00:14:00.450 21:19:15 -- target/invalid.sh@45 -- # out='request: 00:14:00.450 { 00:14:00.450 "nqn": "nqn.2016-06.io.spdk:cnode2968", 00:14:00.450 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:14:00.450 "method": "nvmf_create_subsystem", 00:14:00.450 "req_id": 1 00:14:00.450 } 00:14:00.450 Got JSON-RPC error response 00:14:00.450 response: 00:14:00.450 { 00:14:00.450 "code": -32602, 00:14:00.450 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:14:00.450 }' 00:14:00.450 21:19:15 -- target/invalid.sh@46 -- # [[ request: 00:14:00.450 { 00:14:00.450 "nqn": "nqn.2016-06.io.spdk:cnode2968", 00:14:00.450 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:14:00.450 "method": "nvmf_create_subsystem", 00:14:00.450 "req_id": 1 00:14:00.450 } 00:14:00.450 Got JSON-RPC error response 00:14:00.450 response: 00:14:00.450 { 00:14:00.450 "code": 
-32602, 00:14:00.450 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:14:00.450 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:14:00.450 21:19:15 -- target/invalid.sh@50 -- # echo -e '\x1f' 00:14:00.450 21:19:15 -- target/invalid.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode9342 00:14:00.450 [2024-04-24 21:19:15.410785] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9342: invalid model number 'SPDK_Controller' 00:14:00.709 21:19:15 -- target/invalid.sh@50 -- # out='request: 00:14:00.709 { 00:14:00.709 "nqn": "nqn.2016-06.io.spdk:cnode9342", 00:14:00.709 "model_number": "SPDK_Controller\u001f", 00:14:00.709 "method": "nvmf_create_subsystem", 00:14:00.709 "req_id": 1 00:14:00.709 } 00:14:00.709 Got JSON-RPC error response 00:14:00.709 response: 00:14:00.709 { 00:14:00.709 "code": -32602, 00:14:00.709 "message": "Invalid MN SPDK_Controller\u001f" 00:14:00.709 }' 00:14:00.709 21:19:15 -- target/invalid.sh@51 -- # [[ request: 00:14:00.709 { 00:14:00.709 "nqn": "nqn.2016-06.io.spdk:cnode9342", 00:14:00.709 "model_number": "SPDK_Controller\u001f", 00:14:00.709 "method": "nvmf_create_subsystem", 00:14:00.709 "req_id": 1 00:14:00.709 } 00:14:00.709 Got JSON-RPC error response 00:14:00.709 response: 00:14:00.709 { 00:14:00.709 "code": -32602, 00:14:00.709 "message": "Invalid MN SPDK_Controller\u001f" 00:14:00.709 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:14:00.709 21:19:15 -- target/invalid.sh@54 -- # gen_random_s 21 00:14:00.709 21:19:15 -- target/invalid.sh@19 -- # local length=21 ll 00:14:00.709 21:19:15 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:14:00.709 21:19:15 -- target/invalid.sh@21 -- # local chars 00:14:00.709 21:19:15 -- target/invalid.sh@22 -- # local string 00:14:00.709 21:19:15 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:14:00.709 21:19:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:00.709 21:19:15 -- target/invalid.sh@25 -- # printf %x 37 00:14:00.709 21:19:15 -- target/invalid.sh@25 -- # echo -e '\x25' 00:14:00.709 21:19:15 -- target/invalid.sh@25 -- # string+=% 00:14:00.709 21:19:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:00.709 21:19:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:00.709 21:19:15 -- target/invalid.sh@25 -- # printf %x 76 00:14:00.709 21:19:15 -- target/invalid.sh@25 -- # echo -e '\x4c' 00:14:00.709 21:19:15 -- target/invalid.sh@25 -- # string+=L 00:14:00.709 21:19:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:00.709 21:19:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:00.709 21:19:15 -- target/invalid.sh@25 -- # printf %x 71 00:14:00.709 21:19:15 -- target/invalid.sh@25 -- # echo -e '\x47' 00:14:00.709 21:19:15 -- target/invalid.sh@25 -- # string+=G 00:14:00.709 21:19:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:00.709 21:19:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:00.709 21:19:15 -- target/invalid.sh@25 -- # printf %x 75 00:14:00.709 21:19:15 -- target/invalid.sh@25 -- # echo -e '\x4b' 
00:14:00.709 21:19:15 -- target/invalid.sh@25 -- # string+=K 00:14:00.709 21:19:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:00.709 21:19:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:00.709 21:19:15 -- target/invalid.sh@25 -- # printf %x 87 00:14:00.709 21:19:15 -- target/invalid.sh@25 -- # echo -e '\x57' 00:14:00.709 21:19:15 -- target/invalid.sh@25 -- # string+=W 00:14:00.709 21:19:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:00.709 21:19:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:00.709 21:19:15 -- target/invalid.sh@25 -- # printf %x 44 00:14:00.709 21:19:15 -- target/invalid.sh@25 -- # echo -e '\x2c' 00:14:00.709 21:19:15 -- target/invalid.sh@25 -- # string+=, 00:14:00.709 21:19:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:00.709 21:19:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:00.709 21:19:15 -- target/invalid.sh@25 -- # printf %x 61 00:14:00.709 21:19:15 -- target/invalid.sh@25 -- # echo -e '\x3d' 00:14:00.709 21:19:15 -- target/invalid.sh@25 -- # string+== 00:14:00.709 21:19:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:00.709 21:19:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:00.709 21:19:15 -- target/invalid.sh@25 -- # printf %x 105 00:14:00.709 21:19:15 -- target/invalid.sh@25 -- # echo -e '\x69' 00:14:00.709 21:19:15 -- target/invalid.sh@25 -- # string+=i 00:14:00.709 21:19:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:00.709 21:19:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:00.709 21:19:15 -- target/invalid.sh@25 -- # printf %x 110 00:14:00.709 21:19:15 -- target/invalid.sh@25 -- # echo -e '\x6e' 00:14:00.709 21:19:15 -- target/invalid.sh@25 -- # string+=n 00:14:00.709 21:19:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:00.709 21:19:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:00.710 21:19:15 -- target/invalid.sh@25 -- # printf %x 35 00:14:00.710 21:19:15 -- target/invalid.sh@25 -- # echo -e '\x23' 00:14:00.710 21:19:15 -- target/invalid.sh@25 -- # string+='#' 00:14:00.710 21:19:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:00.710 21:19:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:00.710 21:19:15 -- target/invalid.sh@25 -- # printf %x 48 00:14:00.710 21:19:15 -- target/invalid.sh@25 -- # echo -e '\x30' 00:14:00.710 21:19:15 -- target/invalid.sh@25 -- # string+=0 00:14:00.710 21:19:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:00.710 21:19:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:00.710 21:19:15 -- target/invalid.sh@25 -- # printf %x 90 00:14:00.710 21:19:15 -- target/invalid.sh@25 -- # echo -e '\x5a' 00:14:00.710 21:19:15 -- target/invalid.sh@25 -- # string+=Z 00:14:00.710 21:19:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:00.710 21:19:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:00.710 21:19:15 -- target/invalid.sh@25 -- # printf %x 120 00:14:00.710 21:19:15 -- target/invalid.sh@25 -- # echo -e '\x78' 00:14:00.710 21:19:15 -- target/invalid.sh@25 -- # string+=x 00:14:00.710 21:19:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:00.710 21:19:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:00.710 21:19:15 -- target/invalid.sh@25 -- # printf %x 43 00:14:00.710 21:19:15 -- target/invalid.sh@25 -- # echo -e '\x2b' 00:14:00.710 21:19:15 -- target/invalid.sh@25 -- # string+=+ 00:14:00.710 21:19:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:00.710 21:19:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:00.710 21:19:15 -- target/invalid.sh@25 -- # printf %x 42 00:14:00.710 21:19:15 -- target/invalid.sh@25 -- # echo -e '\x2a' 
00:14:00.710 21:19:15 -- target/invalid.sh@25 -- # string+='*' 00:14:00.710 21:19:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:00.710 21:19:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:00.710 21:19:15 -- target/invalid.sh@25 -- # printf %x 40 00:14:00.710 21:19:15 -- target/invalid.sh@25 -- # echo -e '\x28' 00:14:00.710 21:19:15 -- target/invalid.sh@25 -- # string+='(' 00:14:00.710 21:19:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:00.710 21:19:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:00.710 21:19:15 -- target/invalid.sh@25 -- # printf %x 49 00:14:00.710 21:19:15 -- target/invalid.sh@25 -- # echo -e '\x31' 00:14:00.710 21:19:15 -- target/invalid.sh@25 -- # string+=1 00:14:00.710 21:19:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:00.710 21:19:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:00.710 21:19:15 -- target/invalid.sh@25 -- # printf %x 53 00:14:00.710 21:19:15 -- target/invalid.sh@25 -- # echo -e '\x35' 00:14:00.710 21:19:15 -- target/invalid.sh@25 -- # string+=5 00:14:00.710 21:19:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:00.710 21:19:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:00.710 21:19:15 -- target/invalid.sh@25 -- # printf %x 73 00:14:00.710 21:19:15 -- target/invalid.sh@25 -- # echo -e '\x49' 00:14:00.710 21:19:15 -- target/invalid.sh@25 -- # string+=I 00:14:00.710 21:19:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:00.710 21:19:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:00.710 21:19:15 -- target/invalid.sh@25 -- # printf %x 92 00:14:00.710 21:19:15 -- target/invalid.sh@25 -- # echo -e '\x5c' 00:14:00.710 21:19:15 -- target/invalid.sh@25 -- # string+='\' 00:14:00.710 21:19:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:00.710 21:19:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:00.710 21:19:15 -- target/invalid.sh@25 -- # printf %x 120 00:14:00.710 21:19:15 -- target/invalid.sh@25 -- # echo -e '\x78' 00:14:00.710 21:19:15 -- target/invalid.sh@25 -- # string+=x 00:14:00.710 21:19:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:00.710 21:19:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:00.710 21:19:15 -- target/invalid.sh@28 -- # [[ % == \- ]] 00:14:00.710 21:19:15 -- target/invalid.sh@31 -- # echo '%LGKW,=in#0Zx+*(15I\x' 00:14:00.710 21:19:15 -- target/invalid.sh@54 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '%LGKW,=in#0Zx+*(15I\x' nqn.2016-06.io.spdk:cnode4316 00:14:00.970 [2024-04-24 21:19:15.695137] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4316: invalid serial number '%LGKW,=in#0Zx+*(15I\x' 00:14:00.970 21:19:15 -- target/invalid.sh@54 -- # out='request: 00:14:00.970 { 00:14:00.970 "nqn": "nqn.2016-06.io.spdk:cnode4316", 00:14:00.970 "serial_number": "%LGKW,=in#0Zx+*(15I\\x", 00:14:00.970 "method": "nvmf_create_subsystem", 00:14:00.970 "req_id": 1 00:14:00.970 } 00:14:00.970 Got JSON-RPC error response 00:14:00.970 response: 00:14:00.970 { 00:14:00.970 "code": -32602, 00:14:00.970 "message": "Invalid SN %LGKW,=in#0Zx+*(15I\\x" 00:14:00.970 }' 00:14:00.970 21:19:15 -- target/invalid.sh@55 -- # [[ request: 00:14:00.970 { 00:14:00.970 "nqn": "nqn.2016-06.io.spdk:cnode4316", 00:14:00.970 "serial_number": "%LGKW,=in#0Zx+*(15I\\x", 00:14:00.970 "method": "nvmf_create_subsystem", 00:14:00.970 "req_id": 1 00:14:00.970 } 00:14:00.970 Got JSON-RPC error response 00:14:00.970 response: 00:14:00.970 { 00:14:00.970 "code": -32602, 00:14:00.970 "message": "Invalid SN %LGKW,=in#0Zx+*(15I\\x" 
00:14:00.970 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:14:00.970 21:19:15 -- target/invalid.sh@58 -- # gen_random_s 41 00:14:00.970 21:19:15 -- target/invalid.sh@19 -- # local length=41 ll 00:14:00.970 21:19:15 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:14:00.970 21:19:15 -- target/invalid.sh@21 -- # local chars 00:14:00.970 21:19:15 -- target/invalid.sh@22 -- # local string 00:14:00.970 21:19:15 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:14:00.970 21:19:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:00.970 21:19:15 -- target/invalid.sh@25 -- # printf %x 99 00:14:00.970 21:19:15 -- target/invalid.sh@25 -- # echo -e '\x63' 00:14:00.970 21:19:15 -- target/invalid.sh@25 -- # string+=c 00:14:00.970 21:19:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:00.970 21:19:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:00.970 21:19:15 -- target/invalid.sh@25 -- # printf %x 121 00:14:00.970 21:19:15 -- target/invalid.sh@25 -- # echo -e '\x79' 00:14:00.970 21:19:15 -- target/invalid.sh@25 -- # string+=y 00:14:00.970 21:19:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:00.970 21:19:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:00.970 21:19:15 -- target/invalid.sh@25 -- # printf %x 73 00:14:00.970 21:19:15 -- target/invalid.sh@25 -- # echo -e '\x49' 00:14:00.970 21:19:15 -- target/invalid.sh@25 -- # string+=I 00:14:00.970 21:19:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:00.970 21:19:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:00.970 21:19:15 -- target/invalid.sh@25 -- # printf %x 64 00:14:00.970 21:19:15 -- target/invalid.sh@25 -- # echo -e '\x40' 00:14:00.970 21:19:15 -- target/invalid.sh@25 -- # string+=@ 00:14:00.970 21:19:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:00.970 21:19:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:00.970 21:19:15 -- target/invalid.sh@25 -- # printf %x 42 00:14:00.970 21:19:15 -- target/invalid.sh@25 -- # echo -e '\x2a' 00:14:00.970 21:19:15 -- target/invalid.sh@25 -- # string+='*' 00:14:00.970 21:19:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:00.970 21:19:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:00.971 21:19:15 -- target/invalid.sh@25 -- # printf %x 93 00:14:00.971 21:19:15 -- target/invalid.sh@25 -- # echo -e '\x5d' 00:14:00.971 21:19:15 -- target/invalid.sh@25 -- # string+=']' 00:14:00.971 21:19:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:00.971 21:19:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:00.971 21:19:15 -- target/invalid.sh@25 -- # printf %x 86 00:14:00.971 21:19:15 -- target/invalid.sh@25 -- # echo -e '\x56' 00:14:00.971 21:19:15 -- target/invalid.sh@25 -- # string+=V 00:14:00.971 21:19:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:00.971 21:19:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:00.971 21:19:15 -- target/invalid.sh@25 -- # printf %x 91 00:14:00.971 21:19:15 -- target/invalid.sh@25 -- # echo -e '\x5b' 00:14:00.971 21:19:15 -- target/invalid.sh@25 -- # string+='[' 00:14:00.971 21:19:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:00.971 21:19:15 -- 
target/invalid.sh@24 -- # (( ll < length )) 00:14:00.971 21:19:15 -- target/invalid.sh@25 -- # printf %x 87 00:14:00.971 21:19:15 -- target/invalid.sh@25 -- # echo -e '\x57' 00:14:00.971 21:19:15 -- target/invalid.sh@25 -- # string+=W 00:14:00.971 21:19:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:00.971 21:19:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:00.971 21:19:15 -- target/invalid.sh@25 -- # printf %x 38 00:14:00.971 21:19:15 -- target/invalid.sh@25 -- # echo -e '\x26' 00:14:00.971 21:19:15 -- target/invalid.sh@25 -- # string+='&' 00:14:00.971 21:19:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:00.971 21:19:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:00.971 21:19:15 -- target/invalid.sh@25 -- # printf %x 97 00:14:00.971 21:19:15 -- target/invalid.sh@25 -- # echo -e '\x61' 00:14:00.971 21:19:15 -- target/invalid.sh@25 -- # string+=a 00:14:00.971 21:19:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:00.971 21:19:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:00.971 21:19:15 -- target/invalid.sh@25 -- # printf %x 57 00:14:00.971 21:19:15 -- target/invalid.sh@25 -- # echo -e '\x39' 00:14:00.971 21:19:15 -- target/invalid.sh@25 -- # string+=9 00:14:00.971 21:19:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:00.971 21:19:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:00.971 21:19:15 -- target/invalid.sh@25 -- # printf %x 94 00:14:00.971 21:19:15 -- target/invalid.sh@25 -- # echo -e '\x5e' 00:14:00.971 21:19:15 -- target/invalid.sh@25 -- # string+='^' 00:14:00.971 21:19:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:00.971 21:19:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:00.971 21:19:15 -- target/invalid.sh@25 -- # printf %x 40 00:14:00.971 21:19:15 -- target/invalid.sh@25 -- # echo -e '\x28' 00:14:00.971 21:19:15 -- target/invalid.sh@25 -- # string+='(' 00:14:00.971 21:19:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:00.971 21:19:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:00.971 21:19:15 -- target/invalid.sh@25 -- # printf %x 75 00:14:00.971 21:19:15 -- target/invalid.sh@25 -- # echo -e '\x4b' 00:14:00.971 21:19:15 -- target/invalid.sh@25 -- # string+=K 00:14:00.971 21:19:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:00.971 21:19:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:00.971 21:19:15 -- target/invalid.sh@25 -- # printf %x 87 00:14:00.971 21:19:15 -- target/invalid.sh@25 -- # echo -e '\x57' 00:14:00.971 21:19:15 -- target/invalid.sh@25 -- # string+=W 00:14:00.971 21:19:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:00.971 21:19:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:00.971 21:19:15 -- target/invalid.sh@25 -- # printf %x 43 00:14:00.971 21:19:15 -- target/invalid.sh@25 -- # echo -e '\x2b' 00:14:00.971 21:19:15 -- target/invalid.sh@25 -- # string+=+ 00:14:00.971 21:19:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:00.971 21:19:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:00.971 21:19:15 -- target/invalid.sh@25 -- # printf %x 50 00:14:00.971 21:19:15 -- target/invalid.sh@25 -- # echo -e '\x32' 00:14:00.971 21:19:15 -- target/invalid.sh@25 -- # string+=2 00:14:00.971 21:19:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:00.971 21:19:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:00.971 21:19:15 -- target/invalid.sh@25 -- # printf %x 40 00:14:00.971 21:19:15 -- target/invalid.sh@25 -- # echo -e '\x28' 00:14:00.971 21:19:15 -- target/invalid.sh@25 -- # string+='(' 00:14:00.971 21:19:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:00.971 21:19:15 -- 
target/invalid.sh@24 -- # (( ll < length )) 00:14:00.971 21:19:15 -- target/invalid.sh@25 -- # printf %x 94 00:14:00.971 21:19:15 -- target/invalid.sh@25 -- # echo -e '\x5e' 00:14:00.971 21:19:15 -- target/invalid.sh@25 -- # string+='^' 00:14:00.971 21:19:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:00.971 21:19:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:00.971 21:19:15 -- target/invalid.sh@25 -- # printf %x 96 00:14:00.971 21:19:15 -- target/invalid.sh@25 -- # echo -e '\x60' 00:14:00.971 21:19:15 -- target/invalid.sh@25 -- # string+='`' 00:14:00.971 21:19:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:00.971 21:19:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:00.971 21:19:15 -- target/invalid.sh@25 -- # printf %x 85 00:14:00.971 21:19:15 -- target/invalid.sh@25 -- # echo -e '\x55' 00:14:00.971 21:19:15 -- target/invalid.sh@25 -- # string+=U 00:14:00.971 21:19:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:00.971 21:19:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:00.971 21:19:15 -- target/invalid.sh@25 -- # printf %x 109 00:14:00.971 21:19:15 -- target/invalid.sh@25 -- # echo -e '\x6d' 00:14:00.971 21:19:15 -- target/invalid.sh@25 -- # string+=m 00:14:00.971 21:19:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:00.971 21:19:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:00.971 21:19:15 -- target/invalid.sh@25 -- # printf %x 46 00:14:00.971 21:19:15 -- target/invalid.sh@25 -- # echo -e '\x2e' 00:14:00.971 21:19:15 -- target/invalid.sh@25 -- # string+=. 00:14:00.971 21:19:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:00.971 21:19:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:00.971 21:19:15 -- target/invalid.sh@25 -- # printf %x 67 00:14:00.971 21:19:15 -- target/invalid.sh@25 -- # echo -e '\x43' 00:14:00.971 21:19:15 -- target/invalid.sh@25 -- # string+=C 00:14:00.971 21:19:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:00.971 21:19:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:00.971 21:19:15 -- target/invalid.sh@25 -- # printf %x 85 00:14:00.971 21:19:15 -- target/invalid.sh@25 -- # echo -e '\x55' 00:14:00.971 21:19:15 -- target/invalid.sh@25 -- # string+=U 00:14:00.971 21:19:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:00.971 21:19:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:00.971 21:19:15 -- target/invalid.sh@25 -- # printf %x 56 00:14:00.971 21:19:15 -- target/invalid.sh@25 -- # echo -e '\x38' 00:14:00.971 21:19:15 -- target/invalid.sh@25 -- # string+=8 00:14:00.971 21:19:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:00.971 21:19:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:00.971 21:19:15 -- target/invalid.sh@25 -- # printf %x 43 00:14:00.971 21:19:15 -- target/invalid.sh@25 -- # echo -e '\x2b' 00:14:00.971 21:19:15 -- target/invalid.sh@25 -- # string+=+ 00:14:00.971 21:19:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:00.971 21:19:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:00.971 21:19:15 -- target/invalid.sh@25 -- # printf %x 77 00:14:00.971 21:19:15 -- target/invalid.sh@25 -- # echo -e '\x4d' 00:14:00.971 21:19:15 -- target/invalid.sh@25 -- # string+=M 00:14:00.971 21:19:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:00.971 21:19:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:00.971 21:19:15 -- target/invalid.sh@25 -- # printf %x 83 00:14:00.971 21:19:15 -- target/invalid.sh@25 -- # echo -e '\x53' 00:14:00.971 21:19:15 -- target/invalid.sh@25 -- # string+=S 00:14:00.971 21:19:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:00.971 21:19:15 -- 
target/invalid.sh@24 -- # (( ll < length )) 00:14:00.971 21:19:15 -- target/invalid.sh@25 -- # printf %x 87 00:14:00.971 21:19:15 -- target/invalid.sh@25 -- # echo -e '\x57' 00:14:00.971 21:19:15 -- target/invalid.sh@25 -- # string+=W 00:14:00.971 21:19:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:00.971 21:19:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:00.971 21:19:15 -- target/invalid.sh@25 -- # printf %x 86 00:14:00.971 21:19:15 -- target/invalid.sh@25 -- # echo -e '\x56' 00:14:00.971 21:19:15 -- target/invalid.sh@25 -- # string+=V 00:14:00.971 21:19:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:00.971 21:19:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:00.971 21:19:15 -- target/invalid.sh@25 -- # printf %x 115 00:14:00.971 21:19:15 -- target/invalid.sh@25 -- # echo -e '\x73' 00:14:00.971 21:19:15 -- target/invalid.sh@25 -- # string+=s 00:14:00.971 21:19:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:00.971 21:19:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:00.971 21:19:15 -- target/invalid.sh@25 -- # printf %x 107 00:14:00.971 21:19:15 -- target/invalid.sh@25 -- # echo -e '\x6b' 00:14:00.971 21:19:15 -- target/invalid.sh@25 -- # string+=k 00:14:00.971 21:19:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:00.971 21:19:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:00.971 21:19:15 -- target/invalid.sh@25 -- # printf %x 54 00:14:00.971 21:19:15 -- target/invalid.sh@25 -- # echo -e '\x36' 00:14:00.971 21:19:15 -- target/invalid.sh@25 -- # string+=6 00:14:00.971 21:19:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:00.971 21:19:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.232 21:19:15 -- target/invalid.sh@25 -- # printf %x 60 00:14:01.232 21:19:15 -- target/invalid.sh@25 -- # echo -e '\x3c' 00:14:01.232 21:19:15 -- target/invalid.sh@25 -- # string+='<' 00:14:01.232 21:19:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.232 21:19:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.232 21:19:15 -- target/invalid.sh@25 -- # printf %x 100 00:14:01.232 21:19:15 -- target/invalid.sh@25 -- # echo -e '\x64' 00:14:01.232 21:19:15 -- target/invalid.sh@25 -- # string+=d 00:14:01.232 21:19:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.232 21:19:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.232 21:19:15 -- target/invalid.sh@25 -- # printf %x 118 00:14:01.232 21:19:15 -- target/invalid.sh@25 -- # echo -e '\x76' 00:14:01.232 21:19:15 -- target/invalid.sh@25 -- # string+=v 00:14:01.232 21:19:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.232 21:19:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.232 21:19:15 -- target/invalid.sh@25 -- # printf %x 104 00:14:01.232 21:19:15 -- target/invalid.sh@25 -- # echo -e '\x68' 00:14:01.232 21:19:15 -- target/invalid.sh@25 -- # string+=h 00:14:01.232 21:19:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.232 21:19:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.232 21:19:15 -- target/invalid.sh@25 -- # printf %x 114 00:14:01.232 21:19:15 -- target/invalid.sh@25 -- # echo -e '\x72' 00:14:01.232 21:19:15 -- target/invalid.sh@25 -- # string+=r 00:14:01.232 21:19:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.232 21:19:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.232 21:19:15 -- target/invalid.sh@25 -- # printf %x 95 00:14:01.232 21:19:15 -- target/invalid.sh@25 -- # echo -e '\x5f' 00:14:01.232 21:19:15 -- target/invalid.sh@25 -- # string+=_ 00:14:01.232 21:19:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.232 21:19:15 -- 
target/invalid.sh@24 -- # (( ll < length )) 00:14:01.232 21:19:15 -- target/invalid.sh@28 -- # [[ c == \- ]] 00:14:01.232 21:19:15 -- target/invalid.sh@31 -- # echo 'cyI@*]V[W&a9^(KW+2(^`Um.CU8+MSWVsk6 /dev/null' 00:14:03.350 21:19:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:05.253 21:19:20 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:05.253 00:14:05.253 real 0m11.476s 00:14:05.253 user 0m17.493s 00:14:05.253 sys 0m4.979s 00:14:05.253 21:19:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:05.253 21:19:20 -- common/autotest_common.sh@10 -- # set +x 00:14:05.253 ************************************ 00:14:05.253 END TEST nvmf_invalid 00:14:05.253 ************************************ 00:14:05.253 21:19:20 -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:14:05.254 21:19:20 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:05.254 21:19:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:05.254 21:19:20 -- common/autotest_common.sh@10 -- # set +x 00:14:05.512 ************************************ 00:14:05.512 START TEST nvmf_abort 00:14:05.512 ************************************ 00:14:05.512 21:19:20 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:14:05.512 * Looking for test storage... 00:14:05.512 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:14:05.512 21:19:20 -- target/abort.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:14:05.512 21:19:20 -- nvmf/common.sh@7 -- # uname -s 00:14:05.512 21:19:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:05.512 21:19:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:05.512 21:19:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:05.512 21:19:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:05.512 21:19:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:05.513 21:19:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:05.513 21:19:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:05.513 21:19:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:05.513 21:19:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:05.513 21:19:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:05.513 21:19:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b7babf-2e5c-ee11-906e-a4bf01970bf2 00:14:05.513 21:19:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=80b7babf-2e5c-ee11-906e-a4bf01970bf2 00:14:05.513 21:19:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:05.513 21:19:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:05.513 21:19:20 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:14:05.513 21:19:20 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:05.513 21:19:20 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:14:05.513 21:19:20 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:05.513 21:19:20 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:05.513 21:19:20 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:05.513 21:19:20 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:05.513 21:19:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:05.513 21:19:20 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:05.513 21:19:20 -- paths/export.sh@5 -- # export PATH 00:14:05.513 21:19:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:05.513 21:19:20 -- nvmf/common.sh@47 -- # : 0 00:14:05.513 21:19:20 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:05.513 21:19:20 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:05.513 21:19:20 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:05.513 21:19:20 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:05.513 21:19:20 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:05.513 21:19:20 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:05.513 21:19:20 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:05.513 21:19:20 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:05.513 21:19:20 -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:05.513 21:19:20 -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:14:05.513 21:19:20 -- target/abort.sh@14 -- # nvmftestinit 00:14:05.513 21:19:20 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:14:05.513 21:19:20 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:05.513 21:19:20 -- nvmf/common.sh@437 -- # prepare_net_devs 00:14:05.513 21:19:20 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:14:05.513 21:19:20 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:14:05.513 21:19:20 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:14:05.513 21:19:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:05.513 21:19:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:05.513 21:19:20 -- nvmf/common.sh@403 -- # [[ phy-fallback != virt ]] 00:14:05.513 21:19:20 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:14:05.513 21:19:20 -- nvmf/common.sh@285 -- # xtrace_disable 00:14:05.513 21:19:20 -- common/autotest_common.sh@10 -- # set +x 00:14:10.907 21:19:25 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:10.907 21:19:25 -- nvmf/common.sh@291 -- # pci_devs=() 00:14:10.907 21:19:25 -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:10.907 21:19:25 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:10.907 21:19:25 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:10.907 21:19:25 -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:10.907 21:19:25 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:10.907 21:19:25 -- nvmf/common.sh@295 -- # net_devs=() 00:14:10.907 21:19:25 -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:10.907 21:19:25 -- nvmf/common.sh@296 -- # e810=() 00:14:10.907 21:19:25 -- nvmf/common.sh@296 -- # local -ga e810 00:14:10.907 21:19:25 -- nvmf/common.sh@297 -- # x722=() 00:14:10.907 21:19:25 -- nvmf/common.sh@297 -- # local -ga x722 00:14:10.907 21:19:25 -- nvmf/common.sh@298 -- # mlx=() 00:14:10.907 21:19:25 -- nvmf/common.sh@298 -- # local -ga mlx 00:14:10.907 21:19:25 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:10.907 21:19:25 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:10.907 21:19:25 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:10.907 21:19:25 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:10.907 21:19:25 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:10.907 21:19:25 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:10.907 21:19:25 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:10.907 21:19:25 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:10.907 21:19:25 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:10.907 21:19:25 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:10.907 21:19:25 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:10.907 21:19:25 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:10.907 21:19:25 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:10.907 21:19:25 -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:14:10.907 21:19:25 -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:14:10.907 21:19:25 -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:14:10.907 21:19:25 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:10.907 21:19:25 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:10.907 21:19:25 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:14:10.907 Found 0000:27:00.0 (0x8086 - 0x159b) 00:14:10.907 21:19:25 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:10.907 21:19:25 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:10.907 21:19:25 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:10.907 21:19:25 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:10.907 21:19:25 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:10.907 21:19:25 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:10.907 21:19:25 -- 
nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:14:10.907 Found 0000:27:00.1 (0x8086 - 0x159b) 00:14:10.907 21:19:25 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:10.907 21:19:25 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:10.907 21:19:25 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:10.907 21:19:25 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:10.907 21:19:25 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:10.907 21:19:25 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:10.907 21:19:25 -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:14:10.907 21:19:25 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:10.907 21:19:25 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:10.907 21:19:25 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:10.907 21:19:25 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:10.907 21:19:25 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:14:10.907 Found net devices under 0000:27:00.0: cvl_0_0 00:14:10.907 21:19:25 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:14:10.907 21:19:25 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:10.907 21:19:25 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:10.907 21:19:25 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:10.907 21:19:25 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:10.907 21:19:25 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:14:10.907 Found net devices under 0000:27:00.1: cvl_0_1 00:14:10.907 21:19:25 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:14:10.907 21:19:25 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:14:10.907 21:19:25 -- nvmf/common.sh@403 -- # is_hw=yes 00:14:10.907 21:19:25 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:14:10.907 21:19:25 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:14:10.907 21:19:25 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:14:10.907 21:19:25 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:10.907 21:19:25 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:10.907 21:19:25 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:10.907 21:19:25 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:10.907 21:19:25 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:10.907 21:19:25 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:10.907 21:19:25 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:10.907 21:19:25 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:10.907 21:19:25 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:10.907 21:19:25 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:10.907 21:19:25 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:10.907 21:19:25 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:10.907 21:19:25 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:10.907 21:19:25 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:11.168 21:19:25 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:11.168 21:19:25 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:11.168 21:19:25 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:11.168 21:19:25 -- nvmf/common.sh@261 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:14:11.168 21:19:25 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:11.168 21:19:25 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:11.168 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:11.168 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.463 ms 00:14:11.168 00:14:11.168 --- 10.0.0.2 ping statistics --- 00:14:11.168 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:11.168 rtt min/avg/max/mdev = 0.463/0.463/0.463/0.000 ms 00:14:11.168 21:19:25 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:11.168 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:11.168 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.398 ms 00:14:11.168 00:14:11.168 --- 10.0.0.1 ping statistics --- 00:14:11.168 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:11.168 rtt min/avg/max/mdev = 0.398/0.398/0.398/0.000 ms 00:14:11.168 21:19:25 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:11.168 21:19:25 -- nvmf/common.sh@411 -- # return 0 00:14:11.168 21:19:25 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:14:11.168 21:19:25 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:11.168 21:19:25 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:14:11.168 21:19:25 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:14:11.168 21:19:25 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:11.168 21:19:25 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:14:11.168 21:19:25 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:14:11.168 21:19:26 -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:14:11.168 21:19:26 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:11.168 21:19:26 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:11.168 21:19:26 -- common/autotest_common.sh@10 -- # set +x 00:14:11.168 21:19:26 -- nvmf/common.sh@470 -- # nvmfpid=1139232 00:14:11.168 21:19:26 -- nvmf/common.sh@471 -- # waitforlisten 1139232 00:14:11.168 21:19:26 -- common/autotest_common.sh@817 -- # '[' -z 1139232 ']' 00:14:11.168 21:19:26 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:11.168 21:19:26 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:11.168 21:19:26 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:11.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:11.168 21:19:26 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:11.168 21:19:26 -- common/autotest_common.sh@10 -- # set +x 00:14:11.168 21:19:26 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:11.168 [2024-04-24 21:19:26.131016] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization... 00:14:11.168 [2024-04-24 21:19:26.131144] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:11.429 EAL: No free 2048 kB hugepages reported on node 1 00:14:11.429 [2024-04-24 21:19:26.271077] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:11.429 [2024-04-24 21:19:26.368795] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
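The records above are nvmf_tcp_init building the loopback topology this run tests against: one port of the dual-port NIC (cvl_0_0) is moved into a private network namespace to act as the target side, while its sibling port (cvl_0_1) stays in the root namespace as the initiator, so NVMe/TCP traffic actually crosses between the two ports. A minimal sketch of that same setup, using only the interface names, addresses, and port taken from the records above (the addr-flush steps and error handling are omitted):

# Isolate the target-side port in its own network namespace.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

# Initiator keeps 10.0.0.1 in the root namespace; target gets 10.0.0.2.
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

# Bring everything up, including loopback inside the namespace.
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Admit NVMe/TCP (port 4420) on the initiator-facing interface, then
# verify reachability in both directions before starting nvmf_tgt.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1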
00:14:11.430 [2024-04-24 21:19:26.368846] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:11.430 [2024-04-24 21:19:26.368857] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:11.430 [2024-04-24 21:19:26.368866] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:11.430 [2024-04-24 21:19:26.368875] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:11.430 [2024-04-24 21:19:26.368950] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:11.430 [2024-04-24 21:19:26.369063] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:11.430 [2024-04-24 21:19:26.369070] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:11.997 21:19:26 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:11.997 21:19:26 -- common/autotest_common.sh@850 -- # return 0 00:14:11.997 21:19:26 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:11.997 21:19:26 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:11.997 21:19:26 -- common/autotest_common.sh@10 -- # set +x 00:14:11.997 21:19:26 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:11.997 21:19:26 -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:14:11.997 21:19:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:11.997 21:19:26 -- common/autotest_common.sh@10 -- # set +x 00:14:11.997 [2024-04-24 21:19:26.846117] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:11.997 21:19:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:11.997 21:19:26 -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:14:11.997 21:19:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:11.997 21:19:26 -- common/autotest_common.sh@10 -- # set +x 00:14:11.997 Malloc0 00:14:11.997 21:19:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:11.997 21:19:26 -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:14:11.997 21:19:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:11.997 21:19:26 -- common/autotest_common.sh@10 -- # set +x 00:14:11.997 Delay0 00:14:11.997 21:19:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:11.997 21:19:26 -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:11.997 21:19:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:11.997 21:19:26 -- common/autotest_common.sh@10 -- # set +x 00:14:11.997 21:19:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:11.997 21:19:26 -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:14:11.997 21:19:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:11.997 21:19:26 -- common/autotest_common.sh@10 -- # set +x 00:14:11.997 21:19:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:11.997 21:19:26 -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:11.997 21:19:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:11.997 21:19:26 -- common/autotest_common.sh@10 -- # set +x 00:14:11.997 [2024-04-24 21:19:26.935099] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 
port 4420 *** 00:14:11.997 21:19:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:11.997 21:19:26 -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:11.997 21:19:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:11.997 21:19:26 -- common/autotest_common.sh@10 -- # set +x 00:14:11.997 21:19:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:11.997 21:19:26 -- target/abort.sh@30 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:14:12.256 EAL: No free 2048 kB hugepages reported on node 1 00:14:12.256 [2024-04-24 21:19:27.119537] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:14:14.795 Initializing NVMe Controllers 00:14:14.795 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:14:14.795 controller IO queue size 128 less than required 00:14:14.795 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:14:14.795 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:14:14.795 Initialization complete. Launching workers. 00:14:14.795 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 47711 00:14:14.795 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 47776, failed to submit 62 00:14:14.795 success 47715, unsuccess 61, failed 0 00:14:14.795 21:19:29 -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:14.795 21:19:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:14.795 21:19:29 -- common/autotest_common.sh@10 -- # set +x 00:14:14.795 21:19:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:14.795 21:19:29 -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:14:14.795 21:19:29 -- target/abort.sh@38 -- # nvmftestfini 00:14:14.795 21:19:29 -- nvmf/common.sh@477 -- # nvmfcleanup 00:14:14.795 21:19:29 -- nvmf/common.sh@117 -- # sync 00:14:14.795 21:19:29 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:14.795 21:19:29 -- nvmf/common.sh@120 -- # set +e 00:14:14.795 21:19:29 -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:14.795 21:19:29 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:14.795 rmmod nvme_tcp 00:14:14.795 rmmod nvme_fabrics 00:14:14.795 rmmod nvme_keyring 00:14:14.795 21:19:29 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:14.795 21:19:29 -- nvmf/common.sh@124 -- # set -e 00:14:14.795 21:19:29 -- nvmf/common.sh@125 -- # return 0 00:14:14.795 21:19:29 -- nvmf/common.sh@478 -- # '[' -n 1139232 ']' 00:14:14.795 21:19:29 -- nvmf/common.sh@479 -- # killprocess 1139232 00:14:14.795 21:19:29 -- common/autotest_common.sh@936 -- # '[' -z 1139232 ']' 00:14:14.795 21:19:29 -- common/autotest_common.sh@940 -- # kill -0 1139232 00:14:14.795 21:19:29 -- common/autotest_common.sh@941 -- # uname 00:14:14.795 21:19:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:14.795 21:19:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1139232 00:14:14.795 21:19:29 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:14.795 21:19:29 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:14.795 21:19:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1139232' 00:14:14.795 
killing process with pid 1139232 00:14:14.795 21:19:29 -- common/autotest_common.sh@955 -- # kill 1139232 00:14:14.795 21:19:29 -- common/autotest_common.sh@960 -- # wait 1139232 00:14:15.054 21:19:29 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:14:15.054 21:19:29 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:14:15.054 21:19:29 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:14:15.054 21:19:29 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:15.054 21:19:29 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:15.054 21:19:29 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:15.054 21:19:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:15.054 21:19:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:17.614 21:19:31 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:17.614 00:14:17.614 real 0m11.768s 00:14:17.614 user 0m14.225s 00:14:17.614 sys 0m4.884s 00:14:17.614 21:19:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:17.614 21:19:32 -- common/autotest_common.sh@10 -- # set +x 00:14:17.614 ************************************ 00:14:17.614 END TEST nvmf_abort 00:14:17.614 ************************************ 00:14:17.614 21:19:32 -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:14:17.614 21:19:32 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:17.614 21:19:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:17.614 21:19:32 -- common/autotest_common.sh@10 -- # set +x 00:14:17.614 ************************************ 00:14:17.614 START TEST nvmf_ns_hotplug_stress 00:14:17.614 ************************************ 00:14:17.614 21:19:32 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:14:17.614 * Looking for test storage... 
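Before the new test's output starts in earnest, the abort counters reported a few records back are worth a sanity check. Reading "failed" as I/Os that were completed by a successful abort, and "failed to submit" as abort requests that never made it onto the queue (both readings are interpretations, not wording from the log), the three totals are mutually consistent:

  I/Os issued       = completed + failed        = 127   + 47711 = 47838
  aborts attempted  = submitted + not submitted = 47776 + 62    = 47838   (one abort per I/O)
  aborts resolved   = success + unsuccess       = 47715 + 61    = 47776   (= aborts submitted)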
00:14:17.614 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:14:17.614 21:19:32 -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:14:17.614 21:19:32 -- nvmf/common.sh@7 -- # uname -s 00:14:17.614 21:19:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:17.614 21:19:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:17.614 21:19:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:17.614 21:19:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:17.614 21:19:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:17.614 21:19:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:17.614 21:19:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:17.614 21:19:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:17.614 21:19:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:17.614 21:19:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:17.614 21:19:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b7babf-2e5c-ee11-906e-a4bf01970bf2 00:14:17.614 21:19:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=80b7babf-2e5c-ee11-906e-a4bf01970bf2 00:14:17.614 21:19:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:17.614 21:19:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:17.614 21:19:32 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:14:17.614 21:19:32 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:17.614 21:19:32 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:14:17.614 21:19:32 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:17.614 21:19:32 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:17.614 21:19:32 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:17.614 21:19:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:17.614 21:19:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:17.614 21:19:32 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:17.614 21:19:32 -- paths/export.sh@5 -- # export PATH 00:14:17.614 21:19:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:17.614 21:19:32 -- nvmf/common.sh@47 -- # : 0 00:14:17.614 21:19:32 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:17.614 21:19:32 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:17.614 21:19:32 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:17.614 21:19:32 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:17.614 21:19:32 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:17.614 21:19:32 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:17.614 21:19:32 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:17.614 21:19:32 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:17.614 21:19:32 -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:14:17.614 21:19:32 -- target/ns_hotplug_stress.sh@13 -- # nvmftestinit 00:14:17.614 21:19:32 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:14:17.614 21:19:32 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:17.614 21:19:32 -- nvmf/common.sh@437 -- # prepare_net_devs 00:14:17.614 21:19:32 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:14:17.614 21:19:32 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:14:17.614 21:19:32 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:17.614 21:19:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:17.614 21:19:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:17.614 21:19:32 -- nvmf/common.sh@403 -- # [[ phy-fallback != virt ]] 00:14:17.614 21:19:32 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:14:17.614 21:19:32 -- nvmf/common.sh@285 -- # xtrace_disable 00:14:17.614 21:19:32 -- common/autotest_common.sh@10 -- # set +x 00:14:24.193 21:19:38 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:24.193 21:19:38 -- nvmf/common.sh@291 -- # pci_devs=() 00:14:24.193 21:19:38 -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:24.193 21:19:38 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:24.193 21:19:38 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:24.193 21:19:38 -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:24.193 21:19:38 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:24.193 21:19:38 -- nvmf/common.sh@295 -- # net_devs=() 00:14:24.193 21:19:38 -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:24.193 21:19:38 -- 
nvmf/common.sh@296 -- # e810=() 00:14:24.193 21:19:38 -- nvmf/common.sh@296 -- # local -ga e810 00:14:24.193 21:19:38 -- nvmf/common.sh@297 -- # x722=() 00:14:24.193 21:19:38 -- nvmf/common.sh@297 -- # local -ga x722 00:14:24.193 21:19:38 -- nvmf/common.sh@298 -- # mlx=() 00:14:24.193 21:19:38 -- nvmf/common.sh@298 -- # local -ga mlx 00:14:24.193 21:19:38 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:24.193 21:19:38 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:24.193 21:19:38 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:24.193 21:19:38 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:24.193 21:19:38 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:24.193 21:19:38 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:24.193 21:19:38 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:24.193 21:19:38 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:24.193 21:19:38 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:24.193 21:19:38 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:24.193 21:19:38 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:24.193 21:19:38 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:24.193 21:19:38 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:24.193 21:19:38 -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:14:24.193 21:19:38 -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:14:24.193 21:19:38 -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:14:24.193 21:19:38 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:24.193 21:19:38 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:24.193 21:19:38 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:14:24.193 Found 0000:27:00.0 (0x8086 - 0x159b) 00:14:24.193 21:19:38 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:24.193 21:19:38 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:24.193 21:19:38 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:24.193 21:19:38 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:24.193 21:19:38 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:24.193 21:19:38 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:24.193 21:19:38 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:14:24.193 Found 0000:27:00.1 (0x8086 - 0x159b) 00:14:24.193 21:19:38 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:24.193 21:19:38 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:24.193 21:19:38 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:24.193 21:19:38 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:24.193 21:19:38 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:24.193 21:19:38 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:24.193 21:19:38 -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:14:24.193 21:19:38 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:24.193 21:19:38 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:24.193 21:19:38 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:24.193 21:19:38 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:24.193 21:19:38 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:14:24.193 Found net devices under 0000:27:00.0: cvl_0_0 00:14:24.193 
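The device scan repeating here resolves each recognized PCI function to its kernel netdev purely through sysfs, with no driver-specific tooling. The same lookup, extracted as a standalone sketch with the two PCI addresses taken from the records above:

# Map a PCI network function to its netdev name(s) via sysfs,
# using the same array expansions gather_supported_nvmf_pci_devs runs above.
for pci in 0000:27:00.0 0000:27:00.1; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    pci_net_devs=("${pci_net_devs[@]##*/}")   # keep only the interface name
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
done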
21:19:38 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:14:24.193 21:19:38 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:24.193 21:19:38 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:24.193 21:19:38 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:24.193 21:19:38 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:24.193 21:19:38 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:14:24.193 Found net devices under 0000:27:00.1: cvl_0_1 00:14:24.193 21:19:38 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:14:24.193 21:19:38 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:14:24.193 21:19:38 -- nvmf/common.sh@403 -- # is_hw=yes 00:14:24.193 21:19:38 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:14:24.193 21:19:38 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:14:24.193 21:19:38 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:14:24.193 21:19:38 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:24.193 21:19:38 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:24.193 21:19:38 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:24.193 21:19:38 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:24.193 21:19:38 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:24.193 21:19:38 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:24.193 21:19:38 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:24.193 21:19:38 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:24.193 21:19:38 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:24.193 21:19:38 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:24.193 21:19:38 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:24.193 21:19:38 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:24.193 21:19:38 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:24.193 21:19:38 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:24.193 21:19:38 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:24.193 21:19:38 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:24.193 21:19:38 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:24.193 21:19:38 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:24.193 21:19:38 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:24.193 21:19:38 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:24.193 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:24.193 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.196 ms 00:14:24.193 00:14:24.193 --- 10.0.0.2 ping statistics --- 00:14:24.193 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:24.193 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:14:24.193 21:19:38 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:24.193 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:24.193 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.365 ms 00:14:24.193 00:14:24.193 --- 10.0.0.1 ping statistics --- 00:14:24.193 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:24.193 rtt min/avg/max/mdev = 0.365/0.365/0.365/0.000 ms 00:14:24.193 21:19:38 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:24.193 21:19:38 -- nvmf/common.sh@411 -- # return 0 00:14:24.193 21:19:38 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:14:24.193 21:19:38 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:24.193 21:19:38 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:14:24.193 21:19:38 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:14:24.193 21:19:38 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:24.193 21:19:38 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:14:24.193 21:19:38 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:14:24.193 21:19:38 -- target/ns_hotplug_stress.sh@14 -- # nvmfappstart -m 0xE 00:14:24.193 21:19:38 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:24.193 21:19:38 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:24.194 21:19:38 -- common/autotest_common.sh@10 -- # set +x 00:14:24.194 21:19:38 -- nvmf/common.sh@470 -- # nvmfpid=1144049 00:14:24.194 21:19:38 -- nvmf/common.sh@471 -- # waitforlisten 1144049 00:14:24.194 21:19:38 -- common/autotest_common.sh@817 -- # '[' -z 1144049 ']' 00:14:24.194 21:19:38 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:24.194 21:19:38 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:24.194 21:19:38 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:24.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:24.194 21:19:38 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:24.194 21:19:38 -- common/autotest_common.sh@10 -- # set +x 00:14:24.194 21:19:38 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:24.194 [2024-04-24 21:19:38.556383] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization... 00:14:24.194 [2024-04-24 21:19:38.556504] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:24.194 EAL: No free 2048 kB hugepages reported on node 1 00:14:24.194 [2024-04-24 21:19:38.679568] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:24.194 [2024-04-24 21:19:38.775888] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:24.194 [2024-04-24 21:19:38.775926] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:24.194 [2024-04-24 21:19:38.775935] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:24.194 [2024-04-24 21:19:38.775945] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:24.194 [2024-04-24 21:19:38.775953] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
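With the target process up inside the namespace, the records that follow configure it over the RPC socket: a TCP transport, one subsystem, and two backing bdevs, a delay bdev (to keep I/O in flight long enough to race against hotplug) and a resizable null bdev. The same sequence collected in one place, with the long rpc.py path shortened and all arguments exactly as they appear in the records below:

rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
rpc.py bdev_malloc_create 32 512 -b Malloc0        # 32 MiB malloc bdev, 512 B blocks
rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
rpc.py bdev_null_create NULL1 1000 512             # 1000 MiB null bdev, resized later
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1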
00:14:24.194 [2024-04-24 21:19:38.776104] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:24.194 [2024-04-24 21:19:38.776213] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:24.194 [2024-04-24 21:19:38.776224] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:24.453 21:19:39 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:24.454 21:19:39 -- common/autotest_common.sh@850 -- # return 0 00:14:24.454 21:19:39 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:24.454 21:19:39 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:24.454 21:19:39 -- common/autotest_common.sh@10 -- # set +x 00:14:24.454 21:19:39 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:24.454 21:19:39 -- target/ns_hotplug_stress.sh@16 -- # null_size=1000 00:14:24.454 21:19:39 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:24.729 [2024-04-24 21:19:39.444080] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:24.729 21:19:39 -- target/ns_hotplug_stress.sh@20 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:24.729 21:19:39 -- target/ns_hotplug_stress.sh@21 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:25.002 [2024-04-24 21:19:39.782960] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:25.002 21:19:39 -- target/ns_hotplug_stress.sh@22 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:25.002 21:19:39 -- target/ns_hotplug_stress.sh@23 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:14:25.264 Malloc0 00:14:25.264 21:19:40 -- target/ns_hotplug_stress.sh@24 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:14:25.525 Delay0 00:14:25.525 21:19:40 -- target/ns_hotplug_stress.sh@25 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:25.525 21:19:40 -- target/ns_hotplug_stress.sh@26 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:14:25.785 NULL1 00:14:25.785 21:19:40 -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:14:25.785 21:19:40 -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:14:25.785 21:19:40 -- target/ns_hotplug_stress.sh@33 -- # PERF_PID=1144396 00:14:25.785 21:19:40 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1144396 00:14:25.785 21:19:40 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:26.045 EAL: No free 2048 kB hugepages reported on node 1 00:14:26.045 21:19:40 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:26.342 21:19:41 -- target/ns_hotplug_stress.sh@40 -- # null_size=1001 00:14:26.342 21:19:41 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:14:26.342 [2024-04-24 21:19:41.206876] bdev.c:4963:_tmp_bdev_event_cb: *NOTICE*: Unexpected event type: 1 00:14:26.342 true 00:14:26.342 21:19:41 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1144396 00:14:26.342 21:19:41 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:26.602 21:19:41 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:26.602 21:19:41 -- target/ns_hotplug_stress.sh@40 -- # null_size=1002 00:14:26.602 21:19:41 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:14:26.863 true 00:14:26.863 21:19:41 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1144396 00:14:26.863 21:19:41 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:27.123 21:19:41 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:27.123 21:19:42 -- target/ns_hotplug_stress.sh@40 -- # null_size=1003 00:14:27.123 21:19:42 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:14:27.383 true 00:14:27.383 21:19:42 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1144396 00:14:27.383 21:19:42 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:27.641 21:19:42 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:27.641 21:19:42 -- target/ns_hotplug_stress.sh@40 -- # null_size=1004 00:14:27.641 21:19:42 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:14:27.899 true 00:14:27.899 21:19:42 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1144396 00:14:27.899 21:19:42 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:27.899 21:19:42 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:28.158 21:19:42 -- target/ns_hotplug_stress.sh@40 -- # null_size=1005 00:14:28.158 21:19:42 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:14:28.158 true 00:14:28.419 21:19:43 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1144396 00:14:28.419 21:19:43 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:28.420 21:19:43 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:28.681 21:19:43 -- 
target/ns_hotplug_stress.sh@40 -- # null_size=1006 00:14:28.681 21:19:43 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:14:28.681 true 00:14:28.681 21:19:43 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1144396 00:14:28.681 21:19:43 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:28.942 21:19:43 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:29.200 21:19:43 -- target/ns_hotplug_stress.sh@40 -- # null_size=1007 00:14:29.200 21:19:43 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:14:29.200 true 00:14:29.200 21:19:44 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1144396 00:14:29.200 21:19:44 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:29.458 21:19:44 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:29.458 21:19:44 -- target/ns_hotplug_stress.sh@40 -- # null_size=1008 00:14:29.458 21:19:44 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:14:29.718 true 00:14:29.718 21:19:44 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1144396 00:14:29.718 21:19:44 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:29.718 21:19:44 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:29.979 21:19:44 -- target/ns_hotplug_stress.sh@40 -- # null_size=1009 00:14:29.979 21:19:44 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:14:30.240 true 00:14:30.240 21:19:44 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1144396 00:14:30.240 21:19:44 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:30.240 21:19:45 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:30.500 21:19:45 -- target/ns_hotplug_stress.sh@40 -- # null_size=1010 00:14:30.500 21:19:45 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:14:30.500 true 00:14:30.758 21:19:45 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1144396 00:14:30.758 21:19:45 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:30.758 21:19:45 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:31.016 21:19:45 -- target/ns_hotplug_stress.sh@40 -- # null_size=1011 00:14:31.016 21:19:45 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 
1011 00:14:31.016 true 00:14:31.016 21:19:45 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1144396 00:14:31.016 21:19:45 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:31.276 21:19:46 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:31.276 21:19:46 -- target/ns_hotplug_stress.sh@40 -- # null_size=1012 00:14:31.276 21:19:46 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:14:31.536 true 00:14:31.536 21:19:46 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1144396 00:14:31.536 21:19:46 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:31.797 21:19:46 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:31.797 21:19:46 -- target/ns_hotplug_stress.sh@40 -- # null_size=1013 00:14:31.797 21:19:46 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:14:32.056 true 00:14:32.056 21:19:46 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1144396 00:14:32.056 21:19:46 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:32.056 21:19:47 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:32.314 21:19:47 -- target/ns_hotplug_stress.sh@40 -- # null_size=1014 00:14:32.314 21:19:47 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:14:32.571 true 00:14:32.571 21:19:47 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1144396 00:14:32.571 21:19:47 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:32.571 21:19:47 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:32.830 21:19:47 -- target/ns_hotplug_stress.sh@40 -- # null_size=1015 00:14:32.830 21:19:47 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:14:32.830 true 00:14:32.830 21:19:47 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1144396 00:14:32.830 21:19:47 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:33.090 21:19:47 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:33.090 21:19:48 -- target/ns_hotplug_stress.sh@40 -- # null_size=1016 00:14:33.090 21:19:48 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:14:33.350 true 00:14:33.350 21:19:48 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1144396 00:14:33.350 21:19:48 -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:33.609 21:19:48 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:33.609 21:19:48 -- target/ns_hotplug_stress.sh@40 -- # null_size=1017 00:14:33.609 21:19:48 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:14:33.867 true 00:14:33.867 21:19:48 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1144396 00:14:33.867 21:19:48 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:34.125 21:19:48 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:34.125 21:19:49 -- target/ns_hotplug_stress.sh@40 -- # null_size=1018 00:14:34.125 21:19:49 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:14:34.383 true 00:14:34.383 21:19:49 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1144396 00:14:34.383 21:19:49 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:34.383 21:19:49 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:34.643 21:19:49 -- target/ns_hotplug_stress.sh@40 -- # null_size=1019 00:14:34.643 21:19:49 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:14:34.643 true 00:14:34.643 21:19:49 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1144396 00:14:34.643 21:19:49 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:34.904 21:19:49 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:35.164 21:19:49 -- target/ns_hotplug_stress.sh@40 -- # null_size=1020 00:14:35.164 21:19:49 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:14:35.164 true 00:14:35.164 21:19:50 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1144396 00:14:35.164 21:19:50 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:35.423 21:19:50 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:35.681 21:19:50 -- target/ns_hotplug_stress.sh@40 -- # null_size=1021 00:14:35.681 21:19:50 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:14:35.681 true 00:14:35.681 21:19:50 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1144396 00:14:35.681 21:19:50 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:35.939 21:19:50 -- target/ns_hotplug_stress.sh@37 -- # 
/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:35.939 21:19:50 -- target/ns_hotplug_stress.sh@40 -- # null_size=1022 00:14:35.939 21:19:50 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:14:36.200 true 00:14:36.200 21:19:50 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1144396 00:14:36.200 21:19:50 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:36.200 21:19:51 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:36.462 21:19:51 -- target/ns_hotplug_stress.sh@40 -- # null_size=1023 00:14:36.462 21:19:51 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:14:36.462 true 00:14:36.722 21:19:51 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1144396 00:14:36.722 21:19:51 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:36.722 21:19:51 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:36.981 21:19:51 -- target/ns_hotplug_stress.sh@40 -- # null_size=1024 00:14:36.981 21:19:51 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:14:36.981 true 00:14:36.981 21:19:51 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1144396 00:14:36.981 21:19:51 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:37.240 21:19:52 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:37.501 21:19:52 -- target/ns_hotplug_stress.sh@40 -- # null_size=1025 00:14:37.501 21:19:52 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:14:37.501 true 00:14:37.501 21:19:52 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1144396 00:14:37.501 21:19:52 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:37.762 21:19:52 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:37.762 21:19:52 -- target/ns_hotplug_stress.sh@40 -- # null_size=1026 00:14:37.762 21:19:52 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:14:38.023 true 00:14:38.023 21:19:52 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1144396 00:14:38.023 21:19:52 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:38.291 21:19:53 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:38.291 21:19:53 -- target/ns_hotplug_stress.sh@40 -- # null_size=1027 
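The four records that have been cycling since the perf workload started keep repeating to the end of the section: as long as the 30-second spdk_nvme_perf run is alive, each pass re-plugs the Delay0 namespace and grows the null bdev by 1 MiB (null_size has climbed from 1001 toward 1048 so far). Reconstructed from the repeating records, with the rpc.py path shortened as above, the driving loop is equivalent to:

# Hot-remove/re-add a namespace and resize NULL1 for as long as the
# spdk_nvme_perf workload (PERF_PID) keeps running.
null_size=1000
while kill -0 "$PERF_PID"; do
    rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    null_size=$((null_size + 1))
    rpc.py bdev_null_resize NULL1 "$null_size"
done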
00:14:38.291 21:19:53 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:14:38.557 true 00:14:38.557 21:19:53 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1144396 00:14:38.557 21:19:53 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:38.557 21:19:53 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:38.817 21:19:53 -- target/ns_hotplug_stress.sh@40 -- # null_size=1028 00:14:38.817 21:19:53 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:14:39.078 true 00:14:39.078 21:19:53 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1144396 00:14:39.078 21:19:53 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:39.078 21:19:53 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:39.338 21:19:54 -- target/ns_hotplug_stress.sh@40 -- # null_size=1029 00:14:39.338 21:19:54 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:14:39.338 true 00:14:39.338 21:19:54 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1144396 00:14:39.338 21:19:54 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:39.599 21:19:54 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:39.857 21:19:54 -- target/ns_hotplug_stress.sh@40 -- # null_size=1030 00:14:39.857 21:19:54 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:14:39.857 true 00:14:39.857 21:19:54 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1144396 00:14:39.857 21:19:54 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:40.116 21:19:54 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:40.377 21:19:55 -- target/ns_hotplug_stress.sh@40 -- # null_size=1031 00:14:40.377 21:19:55 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:14:40.377 true 00:14:40.377 21:19:55 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1144396 00:14:40.377 21:19:55 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:40.639 21:19:55 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:40.639 21:19:55 -- target/ns_hotplug_stress.sh@40 -- # null_size=1032 00:14:40.639 21:19:55 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:14:40.901 true 00:14:40.901 21:19:55 -- 
target/ns_hotplug_stress.sh@35 -- # kill -0 1144396 00:14:40.901 21:19:55 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:41.162 21:19:55 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:41.162 21:19:56 -- target/ns_hotplug_stress.sh@40 -- # null_size=1033 00:14:41.162 21:19:56 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:14:41.423 true 00:14:41.423 21:19:56 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1144396 00:14:41.423 21:19:56 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:41.423 21:19:56 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:41.682 21:19:56 -- target/ns_hotplug_stress.sh@40 -- # null_size=1034 00:14:41.682 21:19:56 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:14:41.941 true 00:14:41.941 21:19:56 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1144396 00:14:41.941 21:19:56 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:41.941 21:19:56 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:42.199 21:19:57 -- target/ns_hotplug_stress.sh@40 -- # null_size=1035 00:14:42.199 21:19:57 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:14:42.199 true 00:14:42.460 21:19:57 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1144396 00:14:42.460 21:19:57 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:42.460 21:19:57 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:42.721 21:19:57 -- target/ns_hotplug_stress.sh@40 -- # null_size=1036 00:14:42.721 21:19:57 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:14:42.721 true 00:14:42.721 21:19:57 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1144396 00:14:42.721 21:19:57 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:42.982 21:19:57 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:43.257 21:19:57 -- target/ns_hotplug_stress.sh@40 -- # null_size=1037 00:14:43.257 21:19:57 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:14:43.257 true 00:14:43.257 21:19:58 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1144396 00:14:43.257 21:19:58 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:43.616 21:19:58 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:43.616 21:19:58 -- target/ns_hotplug_stress.sh@40 -- # null_size=1038 00:14:43.616 21:19:58 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:14:43.616 true 00:14:43.616 21:19:58 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1144396 00:14:43.616 21:19:58 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:43.876 21:19:58 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:44.138 21:19:58 -- target/ns_hotplug_stress.sh@40 -- # null_size=1039 00:14:44.138 21:19:58 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:14:44.138 true 00:14:44.138 21:19:59 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1144396 00:14:44.138 21:19:59 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:44.399 21:19:59 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:44.399 21:19:59 -- target/ns_hotplug_stress.sh@40 -- # null_size=1040 00:14:44.399 21:19:59 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:14:44.658 true 00:14:44.658 21:19:59 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1144396 00:14:44.658 21:19:59 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:44.915 21:19:59 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:44.915 21:19:59 -- target/ns_hotplug_stress.sh@40 -- # null_size=1041 00:14:44.915 21:19:59 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:14:45.173 true 00:14:45.173 21:19:59 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1144396 00:14:45.173 21:19:59 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:45.173 21:20:00 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:45.433 21:20:00 -- target/ns_hotplug_stress.sh@40 -- # null_size=1042 00:14:45.433 21:20:00 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:14:45.695 true 00:14:45.695 21:20:00 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1144396 00:14:45.695 21:20:00 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:45.695 21:20:00 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:45.956 21:20:00 -- target/ns_hotplug_stress.sh@40 -- # null_size=1043 00:14:45.956 21:20:00 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:14:45.956 true 00:14:45.956 21:20:00 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1144396 00:14:45.956 21:20:00 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:46.216 21:20:01 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:46.475 21:20:01 -- target/ns_hotplug_stress.sh@40 -- # null_size=1044 00:14:46.475 21:20:01 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:14:46.475 true 00:14:46.475 21:20:01 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1144396 00:14:46.475 21:20:01 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:46.734 21:20:01 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:46.734 21:20:01 -- target/ns_hotplug_stress.sh@40 -- # null_size=1045 00:14:46.734 21:20:01 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:14:46.993 true 00:14:46.993 21:20:01 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1144396 00:14:46.993 21:20:01 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:47.253 21:20:01 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:47.253 21:20:02 -- target/ns_hotplug_stress.sh@40 -- # null_size=1046 00:14:47.253 21:20:02 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:14:47.513 true 00:14:47.513 21:20:02 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1144396 00:14:47.513 21:20:02 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:47.772 21:20:02 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:47.772 21:20:02 -- target/ns_hotplug_stress.sh@40 -- # null_size=1047 00:14:47.772 21:20:02 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:14:48.030 true 00:14:48.030 21:20:02 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1144396 00:14:48.030 21:20:02 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:48.030 21:20:02 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:48.298 21:20:03 -- target/ns_hotplug_stress.sh@40 -- # null_size=1048 00:14:48.298 21:20:03 -- target/ns_hotplug_stress.sh@41 -- # 
/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:14:48.298 true 00:14:48.298 21:20:03 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1144396 00:14:48.298 21:20:03 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:48.558 21:20:03 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:48.819 21:20:03 -- target/ns_hotplug_stress.sh@40 -- # null_size=1049 00:14:48.819 21:20:03 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:14:48.819 true 00:14:48.819 21:20:03 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1144396 00:14:48.819 21:20:03 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:49.080 21:20:03 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:49.340 21:20:04 -- target/ns_hotplug_stress.sh@40 -- # null_size=1050 00:14:49.340 21:20:04 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:14:49.340 true 00:14:49.340 21:20:04 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1144396 00:14:49.340 21:20:04 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:49.599 21:20:04 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:49.599 21:20:04 -- target/ns_hotplug_stress.sh@40 -- # null_size=1051 00:14:49.599 21:20:04 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:14:49.858 true 00:14:49.859 21:20:04 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1144396 00:14:49.859 21:20:04 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:50.118 21:20:04 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:50.118 21:20:05 -- target/ns_hotplug_stress.sh@40 -- # null_size=1052 00:14:50.118 21:20:05 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:14:50.376 true 00:14:50.376 21:20:05 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1144396 00:14:50.376 21:20:05 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:50.377 21:20:05 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:50.636 21:20:05 -- target/ns_hotplug_stress.sh@40 -- # null_size=1053 00:14:50.636 21:20:05 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053 00:14:50.636 true 00:14:50.636 21:20:05 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1144396 
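Every block of entries in this stretch is one pass of the same loop in ns_hotplug_stress.sh. A minimal sketch of that loop, reconstructed from the script line numbers in the trace (@35-@41); the rpc and PID variable names are assumptions, and the PID being probed in this run is 1144396:

    rpc=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py
    while kill -0 $PERF_PID; do                                        # @35: loop while the I/O generator is alive
        $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1     # @36: hot-remove namespace 1 under I/O
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0   # @37: re-attach the Delay0 bdev as a namespace
        null_size=$((null_size + 1))                                   # @40: bump the null bdev size (1027, 1028, ...)
        $rpc bdev_null_resize NULL1 $null_size                         # @41: resize NULL1 while it is serving I/O
    done

The bare `true` entries are the successful exit status of the `kill -0` liveness probe; the loop ends when that probe fails, which shows up further on as the "No such process" line.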
00:14:50.637 21:20:05 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:50.895 21:20:05 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:51.153 21:20:05 -- target/ns_hotplug_stress.sh@40 -- # null_size=1054 00:14:51.153 21:20:05 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054 00:14:51.153 true 00:14:51.153 21:20:06 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1144396 00:14:51.153 21:20:06 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:51.412 21:20:06 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:51.412 21:20:06 -- target/ns_hotplug_stress.sh@40 -- # null_size=1055 00:14:51.412 21:20:06 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1055 00:14:51.671 true 00:14:51.671 21:20:06 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1144396 00:14:51.671 21:20:06 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:51.928 21:20:06 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:51.928 21:20:06 -- target/ns_hotplug_stress.sh@40 -- # null_size=1056 00:14:51.928 21:20:06 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1056 00:14:52.188 true 00:14:52.188 21:20:06 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1144396 00:14:52.188 21:20:06 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:52.188 21:20:07 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:52.446 21:20:07 -- target/ns_hotplug_stress.sh@40 -- # null_size=1057 00:14:52.446 21:20:07 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1057 00:14:52.446 true 00:14:52.446 21:20:07 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1144396 00:14:52.446 21:20:07 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:52.703 21:20:07 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:52.963 21:20:07 -- target/ns_hotplug_stress.sh@40 -- # null_size=1058 00:14:52.963 21:20:07 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1058 00:14:52.963 true 00:14:52.963 21:20:07 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1144396 00:14:52.963 21:20:07 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:53.221 
21:20:08 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:53.478 21:20:08 -- target/ns_hotplug_stress.sh@40 -- # null_size=1059 00:14:53.479 21:20:08 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1059 00:14:53.479 true 00:14:53.479 21:20:08 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1144396 00:14:53.479 21:20:08 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:53.737 21:20:08 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:53.737 21:20:08 -- target/ns_hotplug_stress.sh@40 -- # null_size=1060 00:14:53.737 21:20:08 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1060 00:14:53.997 true 00:14:53.997 21:20:08 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1144396 00:14:53.997 21:20:08 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:53.997 21:20:08 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:54.255 21:20:09 -- target/ns_hotplug_stress.sh@40 -- # null_size=1061 00:14:54.255 21:20:09 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1061 00:14:54.516 true 00:14:54.516 21:20:09 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1144396 00:14:54.516 21:20:09 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:54.516 21:20:09 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:54.775 21:20:09 -- target/ns_hotplug_stress.sh@40 -- # null_size=1062 00:14:54.775 21:20:09 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1062 00:14:54.775 true 00:14:54.775 21:20:09 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1144396 00:14:54.775 21:20:09 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:55.033 21:20:09 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:55.292 21:20:10 -- target/ns_hotplug_stress.sh@40 -- # null_size=1063 00:14:55.292 21:20:10 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1063 00:14:55.292 true 00:14:55.292 21:20:10 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1144396 00:14:55.292 21:20:10 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:55.552 21:20:10 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:55.552 21:20:10 -- 
target/ns_hotplug_stress.sh@40 -- # null_size=1064
00:14:55.552 21:20:10 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1064
00:14:55.811 true
00:14:55.811 21:20:10 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1144396
00:14:55.811 21:20:10 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:14:56.072 21:20:10 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:14:56.072 21:20:10 -- target/ns_hotplug_stress.sh@40 -- # null_size=1065
00:14:56.072 21:20:10 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1065
00:14:56.072 Initializing NVMe Controllers
00:14:56.072 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:14:56.072 Controller IO queue size 128, less than required.
00:14:56.072 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:14:56.072 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:14:56.072 Initialization complete. Launching workers.
00:14:56.072 ========================================================
00:14:56.072                                                                           Latency(us)
00:14:56.072 Device Information                                     : IOPS      MiB/s    Average        min        max
00:14:56.072 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 27691.78  13.52    4622.47    2255.14   44166.92
00:14:56.072 ========================================================
00:14:56.072 Total                                                  : 27691.78  13.52    4622.47    2255.14   44166.92
00:14:56.072
00:14:56.332 true
00:14:56.332 21:20:11 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1144396
00:14:56.332 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 35: kill: (1144396) - No such process
00:14:56.332 21:20:11 -- target/ns_hotplug_stress.sh@44 -- # wait 1144396
00:14:56.332 21:20:11 -- target/ns_hotplug_stress.sh@46 -- # trap - SIGINT SIGTERM EXIT
00:14:56.332 21:20:11 -- target/ns_hotplug_stress.sh@48 -- # nvmftestfini
00:14:56.332 21:20:11 -- nvmf/common.sh@477 -- # nvmfcleanup
00:14:56.332 21:20:11 -- nvmf/common.sh@117 -- # sync
00:14:56.332 21:20:11 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:14:56.332 21:20:11 -- nvmf/common.sh@120 -- # set +e
00:14:56.332 21:20:11 -- nvmf/common.sh@121 -- # for i in {1..20}
00:14:56.332 21:20:11 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:14:56.332 21:20:11 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:14:56.332 21:20:11 -- nvmf/common.sh@124 -- # set -e
00:14:56.332 21:20:11 -- nvmf/common.sh@125 -- # return 0
00:14:56.332 21:20:11 -- nvmf/common.sh@478 -- # '[' -n 1144049 ']'
00:14:56.332 21:20:11 -- nvmf/common.sh@479 -- # killprocess 1144049
00:14:56.332 21:20:11 -- common/autotest_common.sh@936 -- # '[' -z 1144049 ']'
00:14:56.332 21:20:11 -- common/autotest_common.sh@940 -- # kill -0 1144049
00:14:56.332 21:20:11 -- common/autotest_common.sh@941 -- # uname
00:14:56.332 21:20:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:14:56.332 21:20:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1144049
00:14:56.332 21:20:11 -- common/autotest_common.sh@942 -- # process_name=reactor_1
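As a quick consistency check on the summary table above: 13.52 MiB/s divided by 27691.78 IOPS works out to roughly 512 bytes per request (27691.78 x 512 B = 14.18 MB/s = 13.52 MiB/s), so the two throughput columns agree and the generator was evidently issuing single 512-byte-block I/Os. The wide latency spread around the 4622.47 us average (min 2255.14 us, max 44166.92 us) is plausible for this test, since requests can pile up at the driver each time the namespace is yanked, which is what the "Controller IO queue size 128, less than required" warning is hinting at.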
00:14:56.332 21:20:11 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:14:56.332 21:20:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1144049'
killing process with pid 1144049
00:14:56.332 21:20:11 -- common/autotest_common.sh@955 -- # kill 1144049
00:14:56.332 21:20:11 -- common/autotest_common.sh@960 -- # wait 1144049
00:14:56.901 21:20:11 -- nvmf/common.sh@481 -- # '[' '' == iso ']'
00:14:56.902 21:20:11 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]]
00:14:56.902 21:20:11 -- nvmf/common.sh@485 -- # nvmf_tcp_fini
00:14:56.902 21:20:11 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:14:56.902 21:20:11 -- nvmf/common.sh@278 -- # remove_spdk_ns
00:14:56.902 21:20:11 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:14:56.902 21:20:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:14:56.902 21:20:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:14:58.807 21:20:13 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:14:58.807
00:14:58.807 real 0m41.609s
00:14:58.807 user 2m33.795s
00:14:58.807 sys 0m12.085s
00:14:58.807 21:20:13 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:14:58.807 21:20:13 -- common/autotest_common.sh@10 -- # set +x
00:14:58.807 ************************************
00:14:58.807 END TEST nvmf_ns_hotplug_stress
00:14:58.807 ************************************
00:14:59.068 21:20:13 -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp
00:14:59.068 21:20:13 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:14:59.068 21:20:13 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:14:59.068 21:20:13 -- common/autotest_common.sh@10 -- # set +x
00:14:59.068 ************************************
00:14:59.068 START TEST nvmf_connect_stress
00:14:59.068 ************************************
00:14:59.068 21:20:13 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp
00:14:59.068 * Looking for test storage...
00:14:59.068 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:14:59.068 21:20:14 -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:14:59.068 21:20:14 -- nvmf/common.sh@7 -- # uname -s 00:14:59.068 21:20:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:59.068 21:20:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:59.068 21:20:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:59.068 21:20:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:59.068 21:20:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:59.068 21:20:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:59.068 21:20:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:59.068 21:20:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:59.068 21:20:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:59.068 21:20:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:59.068 21:20:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b7babf-2e5c-ee11-906e-a4bf01970bf2 00:14:59.068 21:20:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=80b7babf-2e5c-ee11-906e-a4bf01970bf2 00:14:59.068 21:20:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:59.068 21:20:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:59.068 21:20:14 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:14:59.068 21:20:14 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:59.068 21:20:14 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:14:59.068 21:20:14 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:59.068 21:20:14 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:59.068 21:20:14 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:59.068 21:20:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:59.069 21:20:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:59.069 21:20:14 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:59.069 21:20:14 -- paths/export.sh@5 -- # export PATH 00:14:59.069 21:20:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:59.069 21:20:14 -- nvmf/common.sh@47 -- # : 0 00:14:59.069 21:20:14 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:59.069 21:20:14 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:59.069 21:20:14 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:59.069 21:20:14 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:59.069 21:20:14 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:59.069 21:20:14 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:59.069 21:20:14 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:59.069 21:20:14 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:59.069 21:20:14 -- target/connect_stress.sh@12 -- # nvmftestinit 00:14:59.069 21:20:14 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:14:59.069 21:20:14 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:59.069 21:20:14 -- nvmf/common.sh@437 -- # prepare_net_devs 00:14:59.069 21:20:14 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:14:59.069 21:20:14 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:14:59.069 21:20:14 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:59.069 21:20:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:59.069 21:20:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:59.069 21:20:14 -- nvmf/common.sh@403 -- # [[ phy-fallback != virt ]] 00:14:59.069 21:20:14 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:14:59.069 21:20:14 -- nvmf/common.sh@285 -- # xtrace_disable 00:14:59.069 21:20:14 -- common/autotest_common.sh@10 -- # set +x 00:15:05.651 21:20:19 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:05.651 21:20:19 -- nvmf/common.sh@291 -- # pci_devs=() 00:15:05.651 21:20:19 -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:05.651 21:20:19 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:05.651 21:20:19 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:05.651 21:20:19 -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:05.651 21:20:19 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:05.651 21:20:19 -- nvmf/common.sh@295 -- # net_devs=() 00:15:05.651 21:20:19 -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:05.651 21:20:19 -- nvmf/common.sh@296 -- # e810=() 00:15:05.651 21:20:19 -- nvmf/common.sh@296 -- # local -ga e810 00:15:05.651 21:20:19 -- nvmf/common.sh@297 -- # 
x722=() 00:15:05.651 21:20:19 -- nvmf/common.sh@297 -- # local -ga x722 00:15:05.651 21:20:19 -- nvmf/common.sh@298 -- # mlx=() 00:15:05.651 21:20:19 -- nvmf/common.sh@298 -- # local -ga mlx 00:15:05.651 21:20:19 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:05.651 21:20:19 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:05.651 21:20:19 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:05.651 21:20:19 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:05.651 21:20:19 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:05.651 21:20:19 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:05.651 21:20:19 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:05.651 21:20:19 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:05.651 21:20:19 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:05.651 21:20:19 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:05.651 21:20:19 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:05.651 21:20:19 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:05.651 21:20:19 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:05.651 21:20:19 -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:15:05.651 21:20:19 -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:15:05.651 21:20:19 -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:15:05.651 21:20:19 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:05.651 21:20:19 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:05.651 21:20:19 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:15:05.651 Found 0000:27:00.0 (0x8086 - 0x159b) 00:15:05.651 21:20:19 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:05.651 21:20:19 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:05.651 21:20:19 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:05.651 21:20:19 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:05.651 21:20:19 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:05.651 21:20:19 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:05.651 21:20:19 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:15:05.651 Found 0000:27:00.1 (0x8086 - 0x159b) 00:15:05.651 21:20:19 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:05.651 21:20:19 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:05.651 21:20:19 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:05.651 21:20:19 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:05.651 21:20:19 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:05.651 21:20:19 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:05.651 21:20:19 -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:15:05.651 21:20:19 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:05.651 21:20:19 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:05.651 21:20:19 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:05.651 21:20:19 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:05.651 21:20:19 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:15:05.651 Found net devices under 0000:27:00.0: cvl_0_0 00:15:05.651 21:20:19 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:05.651 21:20:19 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
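The NIC probe traced just above (nvmf/common.sh @382-@390) is plain sysfs matching; a condensed sketch of the per-device step, with the array names taken from the trace itself:

    for pci in "${pci_devs[@]}"; do
        # the kernel lists any bound network interfaces under this sysfs directory
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        (( ${#pci_net_devs[@]} == 0 )) && continue       # no netdev bound; skip the port
        pci_net_devs=("${pci_net_devs[@]##*/}")          # strip the path, keep interface names
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done

In this run both matched ports, 0000:27:00.0 and 0000:27:00.1, resolve to the interfaces cvl_0_0 and cvl_0_1.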
00:15:05.651 21:20:19 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:15:05.651 21:20:19 -- nvmf/common.sh@384 -- # (( 1 == 0 ))
00:15:05.651 21:20:19 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:15:05.651 21:20:19 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1'
00:15:05.651 Found net devices under 0000:27:00.1: cvl_0_1
00:15:05.651 21:20:19 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}")
00:15:05.651 21:20:19 -- nvmf/common.sh@393 -- # (( 2 == 0 ))
00:15:05.652 21:20:19 -- nvmf/common.sh@403 -- # is_hw=yes
00:15:05.652 21:20:19 -- nvmf/common.sh@405 -- # [[ yes == yes ]]
00:15:05.652 21:20:19 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]]
00:15:05.652 21:20:19 -- nvmf/common.sh@407 -- # nvmf_tcp_init
00:15:05.652 21:20:19 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:15:05.652 21:20:19 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:15:05.652 21:20:19 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:15:05.652 21:20:19 -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:15:05.652 21:20:19 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:15:05.652 21:20:19 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:15:05.652 21:20:19 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:15:05.652 21:20:19 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:15:05.652 21:20:19 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:15:05.652 21:20:19 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:15:05.652 21:20:19 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:15:05.652 21:20:19 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:15:05.652 21:20:19 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:15:05.652 21:20:19 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:15:05.652 21:20:19 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:15:05.652 21:20:19 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:15:05.652 21:20:19 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:15:05.652 21:20:19 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:15:05.652 21:20:19 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:15:05.652 21:20:19 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:15:05.652 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:15:05.652 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.602 ms
00:15:05.652
00:15:05.652 --- 10.0.0.2 ping statistics ---
00:15:05.652 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:15:05.652 rtt min/avg/max/mdev = 0.602/0.602/0.602/0.000 ms
00:15:05.652 21:20:19 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:15:05.652 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:15:05.652 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.373 ms
00:15:05.652
00:15:05.652 --- 10.0.0.1 ping statistics ---
00:15:05.652 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:15:05.652 rtt min/avg/max/mdev = 0.373/0.373/0.373/0.000 ms
00:15:05.652 21:20:19 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:15:05.652 21:20:19 -- nvmf/common.sh@411 -- # return 0
00:15:05.652 21:20:19 -- nvmf/common.sh@439 -- # '[' '' == iso ']'
00:15:05.652 21:20:19 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:15:05.652 21:20:19 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]]
00:15:05.652 21:20:19 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]]
00:15:05.652 21:20:19 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:15:05.652 21:20:19 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']'
00:15:05.652 21:20:19 -- nvmf/common.sh@463 -- # modprobe nvme-tcp
00:15:05.652 21:20:19 -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE
00:15:05.652 21:20:19 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt
00:15:05.652 21:20:19 -- common/autotest_common.sh@710 -- # xtrace_disable
00:15:05.652 21:20:19 -- common/autotest_common.sh@10 -- # set +x
00:15:05.652 21:20:19 -- nvmf/common.sh@470 -- # nvmfpid=1154424
00:15:05.652 21:20:19 -- nvmf/common.sh@471 -- # waitforlisten 1154424
00:15:05.652 21:20:19 -- common/autotest_common.sh@817 -- # '[' -z 1154424 ']'
00:15:05.652 21:20:19 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:15:05.652 21:20:19 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:15:05.652 21:20:19 -- common/autotest_common.sh@822 -- # local max_retries=100
00:15:05.652 21:20:19 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:15:05.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:15:05.652 21:20:19 -- common/autotest_common.sh@826 -- # xtrace_disable
00:15:05.652 21:20:19 -- common/autotest_common.sh@10 -- # set +x
00:15:05.652 [2024-04-24 21:20:19.919501] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization...
00:15:05.652 [2024-04-24 21:20:19.919605] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:15:05.652 EAL: No free 2048 kB hugepages reported on node 1
00:15:05.652 [2024-04-24 21:20:20.052566] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3
00:15:05.652 [2024-04-24 21:20:20.163959] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:15:05.652 [2024-04-24 21:20:20.164000] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:15:05.652 [2024-04-24 21:20:20.164010] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:15:05.652 [2024-04-24 21:20:20.164023] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running.
00:15:05.652 [2024-04-24 21:20:20.164030] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
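Condensed from the nvmf_tcp_init and nvmfappstart entries above, the two-namespace topology and target launch reduce to the following sketch (interface names and addresses are the ones this run used; the backgrounding of nvmf_tgt is an assumption implied by the waitforlisten polling):

    ip netns add cvl_0_0_ns_spdk                          # target gets its own network namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # move the first port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # admit NVMe/TCP traffic
    ping -c 1 10.0.0.2                                    # root ns -> target ns sanity check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # and the reverse direction
    # then the target is started inside the namespace:
    ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &

The -m 0xE core mask selects cores 1-3, which matches both the "Total cores available: 3" notice above and the "Reactor started on core 1/2/3" notices that follow.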
00:15:05.652 [2024-04-24 21:20:20.164184] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:05.652 [2024-04-24 21:20:20.164314] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:05.652 [2024-04-24 21:20:20.164323] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:05.912 21:20:20 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:05.913 21:20:20 -- common/autotest_common.sh@850 -- # return 0 00:15:05.913 21:20:20 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:05.913 21:20:20 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:05.913 21:20:20 -- common/autotest_common.sh@10 -- # set +x 00:15:05.913 21:20:20 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:05.913 21:20:20 -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:05.913 21:20:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:05.913 21:20:20 -- common/autotest_common.sh@10 -- # set +x 00:15:05.913 [2024-04-24 21:20:20.687857] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:05.913 21:20:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:05.913 21:20:20 -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:05.913 21:20:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:05.913 21:20:20 -- common/autotest_common.sh@10 -- # set +x 00:15:05.913 21:20:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:05.913 21:20:20 -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:05.913 21:20:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:05.913 21:20:20 -- common/autotest_common.sh@10 -- # set +x 00:15:05.913 [2024-04-24 21:20:20.731008] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:05.913 21:20:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:05.913 21:20:20 -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:15:05.913 21:20:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:05.913 21:20:20 -- common/autotest_common.sh@10 -- # set +x 00:15:05.913 NULL1 00:15:05.913 21:20:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:05.913 21:20:20 -- target/connect_stress.sh@21 -- # PERF_PID=1154739 00:15:05.913 21:20:20 -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:15:05.913 21:20:20 -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:15:05.913 21:20:20 -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:15:05.913 21:20:20 -- target/connect_stress.sh@27 -- # seq 1 20 00:15:05.913 21:20:20 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:05.913 21:20:20 -- target/connect_stress.sh@28 -- # cat 00:15:05.913 21:20:20 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:05.913 21:20:20 -- target/connect_stress.sh@28 -- # cat 00:15:05.913 21:20:20 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:05.913 21:20:20 -- target/connect_stress.sh@28 -- # cat 00:15:05.913 21:20:20 -- target/connect_stress.sh@27 
-- # for i in $(seq 1 20) 00:15:05.913 21:20:20 -- target/connect_stress.sh@28 -- # cat 00:15:05.913 21:20:20 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:05.913 21:20:20 -- target/connect_stress.sh@28 -- # cat 00:15:05.913 21:20:20 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:05.913 21:20:20 -- target/connect_stress.sh@28 -- # cat 00:15:05.913 21:20:20 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:05.913 21:20:20 -- target/connect_stress.sh@28 -- # cat 00:15:05.913 21:20:20 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:05.913 21:20:20 -- target/connect_stress.sh@28 -- # cat 00:15:05.913 21:20:20 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:05.913 21:20:20 -- target/connect_stress.sh@28 -- # cat 00:15:05.913 21:20:20 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:05.913 21:20:20 -- target/connect_stress.sh@28 -- # cat 00:15:05.913 21:20:20 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:05.913 21:20:20 -- target/connect_stress.sh@28 -- # cat 00:15:05.913 21:20:20 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:05.913 21:20:20 -- target/connect_stress.sh@28 -- # cat 00:15:05.913 21:20:20 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:05.913 21:20:20 -- target/connect_stress.sh@28 -- # cat 00:15:05.913 21:20:20 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:05.913 21:20:20 -- target/connect_stress.sh@28 -- # cat 00:15:05.913 21:20:20 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:05.913 21:20:20 -- target/connect_stress.sh@28 -- # cat 00:15:05.913 21:20:20 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:05.913 21:20:20 -- target/connect_stress.sh@28 -- # cat 00:15:05.913 21:20:20 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:05.913 21:20:20 -- target/connect_stress.sh@28 -- # cat 00:15:05.913 21:20:20 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:05.913 21:20:20 -- target/connect_stress.sh@28 -- # cat 00:15:05.913 21:20:20 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:05.913 21:20:20 -- target/connect_stress.sh@28 -- # cat 00:15:05.913 21:20:20 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:05.913 21:20:20 -- target/connect_stress.sh@28 -- # cat 00:15:05.913 21:20:20 -- target/connect_stress.sh@34 -- # kill -0 1154739 00:15:05.913 21:20:20 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:05.913 21:20:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:05.913 21:20:20 -- common/autotest_common.sh@10 -- # set +x 00:15:05.913 EAL: No free 2048 kB hugepages reported on node 1 00:15:06.483 21:20:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:06.483 21:20:21 -- target/connect_stress.sh@34 -- # kill -0 1154739 00:15:06.483 21:20:21 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:06.483 21:20:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:06.483 21:20:21 -- common/autotest_common.sh@10 -- # set +x 00:15:06.743 21:20:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:06.743 21:20:21 -- target/connect_stress.sh@34 -- # kill -0 1154739 00:15:06.743 21:20:21 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:06.743 21:20:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:06.743 21:20:21 -- common/autotest_common.sh@10 -- # set +x 00:15:07.003 21:20:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:07.003 21:20:21 -- target/connect_stress.sh@34 -- # kill -0 1154739 
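Putting the connect_stress.sh trace together, the target-side setup and the stress phase come down to the following outline (a sketch: rpc_cmd is the suite's RPC wrapper, and the twenty snippets the cat calls append to rpc.txt are not shown in this log, so the loop body here is an assumption):

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192                                   # @15
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10   # @16
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 # @17
    rpc_cmd bdev_null_create NULL1 1000 512      # @18: 1000 MiB null bdev, 512-byte blocks
    rpcs=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/rpc.txt        # @23
    # @20: hammer the listener with connect/disconnect cycles for 10 seconds
    /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 &
    PERF_PID=$!                                  # 1154739 in this run
    while kill -0 $PERF_PID; do                  # @34: replay the batched RPCs until the stressor exits
        rpc_cmd < $rpcs                          # @35
    done

The long run of alternating "kill -0 1154739" / "rpc_cmd" entries that follows is exactly this polling loop turning over while the connection storm is in flight.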
00:15:07.003 21:20:21 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:07.003 21:20:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:07.003 21:20:21 -- common/autotest_common.sh@10 -- # set +x 00:15:07.263 21:20:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:07.263 21:20:22 -- target/connect_stress.sh@34 -- # kill -0 1154739 00:15:07.263 21:20:22 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:07.263 21:20:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:07.263 21:20:22 -- common/autotest_common.sh@10 -- # set +x 00:15:07.523 21:20:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:07.523 21:20:22 -- target/connect_stress.sh@34 -- # kill -0 1154739 00:15:07.523 21:20:22 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:07.523 21:20:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:07.523 21:20:22 -- common/autotest_common.sh@10 -- # set +x 00:15:07.783 21:20:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:07.783 21:20:22 -- target/connect_stress.sh@34 -- # kill -0 1154739 00:15:07.783 21:20:22 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:07.783 21:20:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:07.783 21:20:22 -- common/autotest_common.sh@10 -- # set +x 00:15:08.352 21:20:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:08.352 21:20:23 -- target/connect_stress.sh@34 -- # kill -0 1154739 00:15:08.352 21:20:23 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:08.352 21:20:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:08.352 21:20:23 -- common/autotest_common.sh@10 -- # set +x 00:15:08.611 21:20:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:08.611 21:20:23 -- target/connect_stress.sh@34 -- # kill -0 1154739 00:15:08.611 21:20:23 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:08.611 21:20:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:08.611 21:20:23 -- common/autotest_common.sh@10 -- # set +x 00:15:08.872 21:20:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:08.872 21:20:23 -- target/connect_stress.sh@34 -- # kill -0 1154739 00:15:08.872 21:20:23 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:08.872 21:20:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:08.872 21:20:23 -- common/autotest_common.sh@10 -- # set +x 00:15:09.133 21:20:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:09.133 21:20:24 -- target/connect_stress.sh@34 -- # kill -0 1154739 00:15:09.133 21:20:24 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:09.133 21:20:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:09.133 21:20:24 -- common/autotest_common.sh@10 -- # set +x 00:15:09.395 21:20:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:09.395 21:20:24 -- target/connect_stress.sh@34 -- # kill -0 1154739 00:15:09.395 21:20:24 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:09.395 21:20:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:09.395 21:20:24 -- common/autotest_common.sh@10 -- # set +x 00:15:09.967 21:20:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:09.967 21:20:24 -- target/connect_stress.sh@34 -- # kill -0 1154739 00:15:09.967 21:20:24 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:09.967 21:20:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:09.967 21:20:24 -- common/autotest_common.sh@10 -- # set +x 00:15:10.226 21:20:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:10.226 21:20:24 -- target/connect_stress.sh@34 -- # kill -0 1154739 00:15:10.226 
21:20:24 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:10.226 21:20:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:10.226 21:20:24 -- common/autotest_common.sh@10 -- # set +x 00:15:10.485 21:20:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:10.485 21:20:25 -- target/connect_stress.sh@34 -- # kill -0 1154739 00:15:10.485 21:20:25 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:10.485 21:20:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:10.485 21:20:25 -- common/autotest_common.sh@10 -- # set +x 00:15:10.745 21:20:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:10.745 21:20:25 -- target/connect_stress.sh@34 -- # kill -0 1154739 00:15:10.745 21:20:25 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:10.745 21:20:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:10.745 21:20:25 -- common/autotest_common.sh@10 -- # set +x 00:15:11.005 21:20:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:11.005 21:20:25 -- target/connect_stress.sh@34 -- # kill -0 1154739 00:15:11.005 21:20:25 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:11.005 21:20:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:11.005 21:20:25 -- common/autotest_common.sh@10 -- # set +x 00:15:11.576 21:20:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:11.576 21:20:26 -- target/connect_stress.sh@34 -- # kill -0 1154739 00:15:11.576 21:20:26 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:11.576 21:20:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:11.576 21:20:26 -- common/autotest_common.sh@10 -- # set +x 00:15:11.836 21:20:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:11.836 21:20:26 -- target/connect_stress.sh@34 -- # kill -0 1154739 00:15:11.836 21:20:26 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:11.836 21:20:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:11.836 21:20:26 -- common/autotest_common.sh@10 -- # set +x 00:15:12.095 21:20:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:12.095 21:20:26 -- target/connect_stress.sh@34 -- # kill -0 1154739 00:15:12.095 21:20:26 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:12.095 21:20:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:12.095 21:20:26 -- common/autotest_common.sh@10 -- # set +x 00:15:12.355 21:20:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:12.355 21:20:27 -- target/connect_stress.sh@34 -- # kill -0 1154739 00:15:12.355 21:20:27 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:12.355 21:20:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:12.355 21:20:27 -- common/autotest_common.sh@10 -- # set +x 00:15:12.616 21:20:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:12.616 21:20:27 -- target/connect_stress.sh@34 -- # kill -0 1154739 00:15:12.616 21:20:27 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:12.616 21:20:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:12.616 21:20:27 -- common/autotest_common.sh@10 -- # set +x 00:15:13.188 21:20:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:13.188 21:20:27 -- target/connect_stress.sh@34 -- # kill -0 1154739 00:15:13.188 21:20:27 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:13.188 21:20:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:13.188 21:20:27 -- common/autotest_common.sh@10 -- # set +x 00:15:13.448 21:20:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:13.448 21:20:28 -- target/connect_stress.sh@34 -- # kill -0 1154739 00:15:13.448 21:20:28 -- 
target/connect_stress.sh@35 -- # rpc_cmd 00:15:13.448 21:20:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:13.448 21:20:28 -- common/autotest_common.sh@10 -- # set +x 00:15:13.706 21:20:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:13.706 21:20:28 -- target/connect_stress.sh@34 -- # kill -0 1154739 00:15:13.706 21:20:28 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:13.706 21:20:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:13.706 21:20:28 -- common/autotest_common.sh@10 -- # set +x 00:15:13.965 21:20:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:13.965 21:20:28 -- target/connect_stress.sh@34 -- # kill -0 1154739 00:15:13.965 21:20:28 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:13.965 21:20:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:13.965 21:20:28 -- common/autotest_common.sh@10 -- # set +x 00:15:14.226 21:20:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:14.227 21:20:29 -- target/connect_stress.sh@34 -- # kill -0 1154739 00:15:14.227 21:20:29 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:14.227 21:20:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:14.227 21:20:29 -- common/autotest_common.sh@10 -- # set +x 00:15:14.798 21:20:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:14.798 21:20:29 -- target/connect_stress.sh@34 -- # kill -0 1154739 00:15:14.798 21:20:29 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:14.798 21:20:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:14.798 21:20:29 -- common/autotest_common.sh@10 -- # set +x 00:15:15.057 21:20:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:15.057 21:20:29 -- target/connect_stress.sh@34 -- # kill -0 1154739 00:15:15.057 21:20:29 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:15.057 21:20:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:15.057 21:20:29 -- common/autotest_common.sh@10 -- # set +x 00:15:15.315 21:20:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:15.315 21:20:30 -- target/connect_stress.sh@34 -- # kill -0 1154739 00:15:15.315 21:20:30 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:15.315 21:20:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:15.315 21:20:30 -- common/autotest_common.sh@10 -- # set +x 00:15:15.575 21:20:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:15.575 21:20:30 -- target/connect_stress.sh@34 -- # kill -0 1154739 00:15:15.575 21:20:30 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:15.575 21:20:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:15.575 21:20:30 -- common/autotest_common.sh@10 -- # set +x 00:15:15.885 21:20:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:15.885 21:20:30 -- target/connect_stress.sh@34 -- # kill -0 1154739 00:15:15.885 21:20:30 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:15.885 21:20:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:15.885 21:20:30 -- common/autotest_common.sh@10 -- # set +x 00:15:16.186 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:16.186 21:20:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:16.186 21:20:31 -- target/connect_stress.sh@34 -- # kill -0 1154739 00:15:16.186 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1154739) - No such process 00:15:16.186 21:20:31 -- target/connect_stress.sh@38 -- # wait 1154739 00:15:16.186 21:20:31 -- target/connect_stress.sh@39 -- # rm -f 
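As with the earlier test, the loop terminates when the `kill -0` probe reports "No such process", after which the trap is cleared, the batched-RPC file is removed, and nvmftestfini unloads the nvme-tcp modules and kills the target; the teardown entries below mirror the nvmf_ns_hotplug_stress teardown almost line for line, just with pid 1154424 in place of 1144049.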
/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:15:16.186 21:20:31 -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:15:16.186 21:20:31 -- target/connect_stress.sh@43 -- # nvmftestfini 00:15:16.186 21:20:31 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:16.186 21:20:31 -- nvmf/common.sh@117 -- # sync 00:15:16.186 21:20:31 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:16.186 21:20:31 -- nvmf/common.sh@120 -- # set +e 00:15:16.186 21:20:31 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:16.186 21:20:31 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:16.186 rmmod nvme_tcp 00:15:16.186 rmmod nvme_fabrics 00:15:16.186 rmmod nvme_keyring 00:15:16.448 21:20:31 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:16.448 21:20:31 -- nvmf/common.sh@124 -- # set -e 00:15:16.448 21:20:31 -- nvmf/common.sh@125 -- # return 0 00:15:16.448 21:20:31 -- nvmf/common.sh@478 -- # '[' -n 1154424 ']' 00:15:16.448 21:20:31 -- nvmf/common.sh@479 -- # killprocess 1154424 00:15:16.448 21:20:31 -- common/autotest_common.sh@936 -- # '[' -z 1154424 ']' 00:15:16.448 21:20:31 -- common/autotest_common.sh@940 -- # kill -0 1154424 00:15:16.448 21:20:31 -- common/autotest_common.sh@941 -- # uname 00:15:16.448 21:20:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:16.448 21:20:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1154424 00:15:16.448 21:20:31 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:16.448 21:20:31 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:16.448 21:20:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1154424' 00:15:16.448 killing process with pid 1154424 00:15:16.448 21:20:31 -- common/autotest_common.sh@955 -- # kill 1154424 00:15:16.448 21:20:31 -- common/autotest_common.sh@960 -- # wait 1154424 00:15:16.707 21:20:31 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:16.707 21:20:31 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:15:16.707 21:20:31 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:15:16.707 21:20:31 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:16.707 21:20:31 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:16.707 21:20:31 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:16.707 21:20:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:16.707 21:20:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:19.245 21:20:33 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:19.245 00:15:19.245 real 0m19.815s 00:15:19.245 user 0m43.836s 00:15:19.245 sys 0m6.194s 00:15:19.245 21:20:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:19.245 21:20:33 -- common/autotest_common.sh@10 -- # set +x 00:15:19.245 ************************************ 00:15:19.245 END TEST nvmf_connect_stress 00:15:19.245 ************************************ 00:15:19.245 21:20:33 -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:15:19.245 21:20:33 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:19.245 21:20:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:19.245 21:20:33 -- common/autotest_common.sh@10 -- # set +x 00:15:19.245 ************************************ 00:15:19.245 START TEST nvmf_fused_ordering 00:15:19.245 ************************************ 00:15:19.245 21:20:33 -- common/autotest_common.sh@1111 -- # 
/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:15:19.245 * Looking for test storage... 00:15:19.245 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:15:19.245 21:20:33 -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:15:19.245 21:20:33 -- nvmf/common.sh@7 -- # uname -s 00:15:19.245 21:20:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:19.245 21:20:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:19.245 21:20:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:19.245 21:20:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:19.245 21:20:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:19.245 21:20:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:19.245 21:20:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:19.246 21:20:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:19.246 21:20:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:19.246 21:20:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:19.246 21:20:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b7babf-2e5c-ee11-906e-a4bf01970bf2 00:15:19.246 21:20:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=80b7babf-2e5c-ee11-906e-a4bf01970bf2 00:15:19.246 21:20:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:19.246 21:20:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:19.246 21:20:33 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:15:19.246 21:20:33 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:19.246 21:20:33 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:15:19.246 21:20:33 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:19.246 21:20:33 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:19.246 21:20:33 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:19.246 21:20:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:19.246 21:20:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:19.246 21:20:33 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:19.246 21:20:33 -- paths/export.sh@5 -- # export PATH 00:15:19.246 21:20:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:19.246 21:20:33 -- nvmf/common.sh@47 -- # : 0 00:15:19.246 21:20:33 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:19.246 21:20:33 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:19.246 21:20:33 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:19.246 21:20:33 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:19.246 21:20:33 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:19.246 21:20:33 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:19.246 21:20:33 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:19.246 21:20:33 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:19.246 21:20:33 -- target/fused_ordering.sh@12 -- # nvmftestinit 00:15:19.246 21:20:33 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:15:19.246 21:20:33 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:19.246 21:20:33 -- nvmf/common.sh@437 -- # prepare_net_devs 00:15:19.246 21:20:33 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:15:19.246 21:20:33 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:15:19.246 21:20:33 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:19.246 21:20:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:19.246 21:20:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:19.246 21:20:33 -- nvmf/common.sh@403 -- # [[ phy-fallback != virt ]] 00:15:19.246 21:20:33 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:15:19.246 21:20:33 -- nvmf/common.sh@285 -- # xtrace_disable 00:15:19.246 21:20:33 -- common/autotest_common.sh@10 -- # set +x 00:15:24.522 21:20:39 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:24.522 21:20:39 -- nvmf/common.sh@291 -- # pci_devs=() 00:15:24.522 21:20:39 -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:24.522 21:20:39 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:24.522 21:20:39 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:24.522 21:20:39 -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:24.522 21:20:39 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:24.522 21:20:39 -- nvmf/common.sh@295 -- # net_devs=() 00:15:24.522 21:20:39 -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:24.522 21:20:39 -- nvmf/common.sh@296 -- # e810=() 00:15:24.522 21:20:39 -- nvmf/common.sh@296 -- # local -ga e810 00:15:24.522 21:20:39 -- nvmf/common.sh@297 -- # 
x722=() 00:15:24.522 21:20:39 -- nvmf/common.sh@297 -- # local -ga x722 00:15:24.522 21:20:39 -- nvmf/common.sh@298 -- # mlx=() 00:15:24.522 21:20:39 -- nvmf/common.sh@298 -- # local -ga mlx 00:15:24.522 21:20:39 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:24.522 21:20:39 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:24.522 21:20:39 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:24.522 21:20:39 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:24.522 21:20:39 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:24.522 21:20:39 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:24.522 21:20:39 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:24.522 21:20:39 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:24.522 21:20:39 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:24.522 21:20:39 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:24.522 21:20:39 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:24.522 21:20:39 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:24.522 21:20:39 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:24.522 21:20:39 -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:15:24.522 21:20:39 -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:15:24.522 21:20:39 -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:15:24.522 21:20:39 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:24.522 21:20:39 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:24.522 21:20:39 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:15:24.522 Found 0000:27:00.0 (0x8086 - 0x159b) 00:15:24.522 21:20:39 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:24.522 21:20:39 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:24.522 21:20:39 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:24.522 21:20:39 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:24.522 21:20:39 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:24.522 21:20:39 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:24.522 21:20:39 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:15:24.522 Found 0000:27:00.1 (0x8086 - 0x159b) 00:15:24.522 21:20:39 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:24.522 21:20:39 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:24.522 21:20:39 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:24.522 21:20:39 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:24.522 21:20:39 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:24.522 21:20:39 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:24.522 21:20:39 -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:15:24.522 21:20:39 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:24.522 21:20:39 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:24.522 21:20:39 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:24.522 21:20:39 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:24.522 21:20:39 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:15:24.522 Found net devices under 0000:27:00.0: cvl_0_0 00:15:24.522 21:20:39 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:24.522 21:20:39 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
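The xtrace above shows nvmf/common.sh matching supported NICs by PCI vendor/device ID and then mapping each matched device to its kernel net interface via sysfs. A minimal sketch of that discovery step, assuming direct sysfs reads rather than the script's internal pci_bus_cache (the 0x8086/0x159b IDs are the E810 parts seen on this host; other IDs would work the same way):

# Sketch: find PCI NICs by vendor/device ID and report their netdevs,
# mirroring pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) above.
for pci in /sys/bus/pci/devices/*; do
    vendor=$(cat "$pci/vendor" 2>/dev/null)
    device=$(cat "$pci/device" 2>/dev/null)
    [[ $vendor == 0x8086 && $device == 0x159b ]] || continue
    for netdir in "$pci"/net/*; do
        [[ -e $netdir ]] || continue
        # e.g. "Found net devices under 0000:27:00.0: cvl_0_0"
        echo "Found net devices under ${pci##*/}: ${netdir##*/}"
    done
done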
00:15:24.522 21:20:39 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:24.522 21:20:39 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:24.522 21:20:39 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:24.522 21:20:39 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:15:24.522 Found net devices under 0000:27:00.1: cvl_0_1 00:15:24.522 21:20:39 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:24.522 21:20:39 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:15:24.522 21:20:39 -- nvmf/common.sh@403 -- # is_hw=yes 00:15:24.522 21:20:39 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:15:24.522 21:20:39 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:15:24.522 21:20:39 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:15:24.522 21:20:39 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:24.522 21:20:39 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:24.522 21:20:39 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:24.522 21:20:39 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:24.522 21:20:39 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:24.522 21:20:39 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:24.522 21:20:39 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:24.522 21:20:39 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:24.522 21:20:39 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:24.522 21:20:39 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:24.522 21:20:39 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:24.522 21:20:39 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:24.522 21:20:39 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:24.522 21:20:39 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:24.522 21:20:39 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:24.522 21:20:39 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:24.522 21:20:39 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:24.522 21:20:39 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:24.522 21:20:39 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:24.522 21:20:39 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:24.522 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:24.522 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.469 ms 00:15:24.522 00:15:24.522 --- 10.0.0.2 ping statistics --- 00:15:24.522 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:24.522 rtt min/avg/max/mdev = 0.469/0.469/0.469/0.000 ms 00:15:24.522 21:20:39 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:24.522 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:24.522 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.251 ms 00:15:24.522 00:15:24.522 --- 10.0.0.1 ping statistics --- 00:15:24.522 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:24.522 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:15:24.522 21:20:39 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:24.522 21:20:39 -- nvmf/common.sh@411 -- # return 0 00:15:24.522 21:20:39 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:15:24.522 21:20:39 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:24.522 21:20:39 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:15:24.522 21:20:39 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:15:24.522 21:20:39 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:24.522 21:20:39 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:15:24.522 21:20:39 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:15:24.522 21:20:39 -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:15:24.522 21:20:39 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:24.522 21:20:39 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:24.522 21:20:39 -- common/autotest_common.sh@10 -- # set +x 00:15:24.522 21:20:39 -- nvmf/common.sh@470 -- # nvmfpid=1160738 00:15:24.522 21:20:39 -- nvmf/common.sh@471 -- # waitforlisten 1160738 00:15:24.522 21:20:39 -- common/autotest_common.sh@817 -- # '[' -z 1160738 ']' 00:15:24.522 21:20:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:24.522 21:20:39 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:24.522 21:20:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:24.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:24.522 21:20:39 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:24.522 21:20:39 -- common/autotest_common.sh@10 -- # set +x 00:15:24.522 21:20:39 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:24.781 [2024-04-24 21:20:39.533509] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization... 00:15:24.782 [2024-04-24 21:20:39.533614] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:24.782 EAL: No free 2048 kB hugepages reported on node 1 00:15:24.782 [2024-04-24 21:20:39.656430] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:25.040 [2024-04-24 21:20:39.751642] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:25.040 [2024-04-24 21:20:39.751674] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:25.040 [2024-04-24 21:20:39.751686] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:25.040 [2024-04-24 21:20:39.751695] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:25.040 [2024-04-24 21:20:39.751702] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
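The nvmf_tcp_init sequence above builds the loopback-free, physical-NIC test topology: the target-side port (cvl_0_0) is moved into a private network namespace while the initiator-side port (cvl_0_1) stays in the root namespace, and the two ends talk over 10.0.0.0/24. A condensed sketch of the same commands, taken from the xtrace (interface names match this host; substitute your own ports, and run as root):

# Target port lives in its own namespace; initiator port stays in the root ns.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Let NVMe/TCP traffic from the initiator port reach the target listener.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                  # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator

Isolating the target in a namespace is what lets both ports of the same physical NIC be used as a real back-to-back link on a single machine, which is why the nvmf_tgt process itself is launched under ip netns exec.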
00:15:25.040 [2024-04-24 21:20:39.751726] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:25.298 21:20:40 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:25.298 21:20:40 -- common/autotest_common.sh@850 -- # return 0 00:15:25.298 21:20:40 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:25.298 21:20:40 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:25.298 21:20:40 -- common/autotest_common.sh@10 -- # set +x 00:15:25.558 21:20:40 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:25.558 21:20:40 -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:25.558 21:20:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:25.558 21:20:40 -- common/autotest_common.sh@10 -- # set +x 00:15:25.558 [2024-04-24 21:20:40.270823] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:25.558 21:20:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:25.558 21:20:40 -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:25.558 21:20:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:25.558 21:20:40 -- common/autotest_common.sh@10 -- # set +x 00:15:25.558 21:20:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:25.558 21:20:40 -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:25.558 21:20:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:25.558 21:20:40 -- common/autotest_common.sh@10 -- # set +x 00:15:25.558 [2024-04-24 21:20:40.286973] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:25.558 21:20:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:25.558 21:20:40 -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:15:25.558 21:20:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:25.558 21:20:40 -- common/autotest_common.sh@10 -- # set +x 00:15:25.558 NULL1 00:15:25.558 21:20:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:25.558 21:20:40 -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:15:25.558 21:20:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:25.558 21:20:40 -- common/autotest_common.sh@10 -- # set +x 00:15:25.558 21:20:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:25.558 21:20:40 -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:15:25.558 21:20:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:25.558 21:20:40 -- common/autotest_common.sh@10 -- # set +x 00:15:25.558 21:20:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:25.558 21:20:40 -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:15:25.558 [2024-04-24 21:20:40.352263] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization... 
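The rpc_cmd calls above assemble the target that the fused_ordering initiator then connects to. A sketch of the equivalent commands spelled out with scripts/rpc.py, assuming rpc_cmd wraps it against the default /var/tmp/spdk.sock ($SPDK_DIR is a placeholder for the repo root; all arguments are the ones visible in the xtrace):

rpc=$SPDK_DIR/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc bdev_null_create NULL1 1000 512   # 1000 MB null bdev, 512-byte blocks ("size: 1GB" below)
$rpc bdev_wait_for_examine
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
# Drive fused command pairs against the namespace from the initiator side:
$SPDK_DIR/test/nvme/fused_ordering/fused_ordering \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

Each fused_ordering(N) line that follows is the test acknowledging one submitted fused pair; the run counts up through 1023 before the trap/cleanup path tears the target down.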
00:15:25.558 [2024-04-24 21:20:40.352344] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1160785 ] 00:15:25.558 EAL: No free 2048 kB hugepages reported on node 1 00:15:25.818 Attached to nqn.2016-06.io.spdk:cnode1 00:15:25.818 Namespace ID: 1 size: 1GB 00:15:25.818 fused_ordering(0) 00:15:25.818 fused_ordering(1) 00:15:25.818 fused_ordering(2) 00:15:25.818 fused_ordering(3) 00:15:25.818 fused_ordering(4) 00:15:25.818 fused_ordering(5) 00:15:25.818 fused_ordering(6) 00:15:25.818 fused_ordering(7) 00:15:25.818 fused_ordering(8) 00:15:25.818 fused_ordering(9) 00:15:25.818 fused_ordering(10) 00:15:25.818 fused_ordering(11) 00:15:25.818 fused_ordering(12) 00:15:25.818 fused_ordering(13) 00:15:25.818 fused_ordering(14) 00:15:25.818 fused_ordering(15) 00:15:25.818 fused_ordering(16) 00:15:25.818 fused_ordering(17) 00:15:25.818 fused_ordering(18) 00:15:25.818 fused_ordering(19) 00:15:25.818 fused_ordering(20) 00:15:25.818 fused_ordering(21) 00:15:25.818 fused_ordering(22) 00:15:25.818 fused_ordering(23) 00:15:25.818 fused_ordering(24) 00:15:25.818 fused_ordering(25) 00:15:25.818 fused_ordering(26) 00:15:25.818 fused_ordering(27) 00:15:25.818 fused_ordering(28) 00:15:25.818 fused_ordering(29) 00:15:25.818 fused_ordering(30) 00:15:25.818 fused_ordering(31) 00:15:25.818 fused_ordering(32) 00:15:25.818 fused_ordering(33) 00:15:25.818 fused_ordering(34) 00:15:25.818 fused_ordering(35) 00:15:25.818 fused_ordering(36) 00:15:25.818 fused_ordering(37) 00:15:25.818 fused_ordering(38) 00:15:25.818 fused_ordering(39) 00:15:25.818 fused_ordering(40) 00:15:25.818 fused_ordering(41) 00:15:25.818 fused_ordering(42) 00:15:25.818 fused_ordering(43) 00:15:25.818 fused_ordering(44) 00:15:25.818 fused_ordering(45) 00:15:25.818 fused_ordering(46) 00:15:25.818 fused_ordering(47) 00:15:25.818 fused_ordering(48) 00:15:25.818 fused_ordering(49) 00:15:25.818 fused_ordering(50) 00:15:25.818 fused_ordering(51) 00:15:25.818 fused_ordering(52) 00:15:25.818 fused_ordering(53) 00:15:25.818 fused_ordering(54) 00:15:25.818 fused_ordering(55) 00:15:25.818 fused_ordering(56) 00:15:25.818 fused_ordering(57) 00:15:25.818 fused_ordering(58) 00:15:25.818 fused_ordering(59) 00:15:25.818 fused_ordering(60) 00:15:25.818 fused_ordering(61) 00:15:25.818 fused_ordering(62) 00:15:25.818 fused_ordering(63) 00:15:25.818 fused_ordering(64) 00:15:25.818 fused_ordering(65) 00:15:25.818 fused_ordering(66) 00:15:25.818 fused_ordering(67) 00:15:25.818 fused_ordering(68) 00:15:25.818 fused_ordering(69) 00:15:25.818 fused_ordering(70) 00:15:25.818 fused_ordering(71) 00:15:25.818 fused_ordering(72) 00:15:25.818 fused_ordering(73) 00:15:25.818 fused_ordering(74) 00:15:25.818 fused_ordering(75) 00:15:25.818 fused_ordering(76) 00:15:25.818 fused_ordering(77) 00:15:25.818 fused_ordering(78) 00:15:25.818 fused_ordering(79) 00:15:25.818 fused_ordering(80) 00:15:25.818 fused_ordering(81) 00:15:25.818 fused_ordering(82) 00:15:25.818 fused_ordering(83) 00:15:25.818 fused_ordering(84) 00:15:25.818 fused_ordering(85) 00:15:25.818 fused_ordering(86) 00:15:25.818 fused_ordering(87) 00:15:25.818 fused_ordering(88) 00:15:25.818 fused_ordering(89) 00:15:25.818 fused_ordering(90) 00:15:25.818 fused_ordering(91) 00:15:25.818 fused_ordering(92) 00:15:25.818 fused_ordering(93) 00:15:25.818 fused_ordering(94) 00:15:25.818 fused_ordering(95) 00:15:25.818 fused_ordering(96) 00:15:25.818 
fused_ordering(97) 00:15:25.818 fused_ordering(98) 00:15:25.818 fused_ordering(99) 00:15:25.818 fused_ordering(100) 00:15:25.818 fused_ordering(101) 00:15:25.818 fused_ordering(102) 00:15:25.818 fused_ordering(103) 00:15:25.818 fused_ordering(104) 00:15:25.818 fused_ordering(105) 00:15:25.818 fused_ordering(106) 00:15:25.818 fused_ordering(107) 00:15:25.818 fused_ordering(108) 00:15:25.818 fused_ordering(109) 00:15:25.818 fused_ordering(110) 00:15:25.818 fused_ordering(111) 00:15:25.818 fused_ordering(112) 00:15:25.818 fused_ordering(113) 00:15:25.818 fused_ordering(114) 00:15:25.818 fused_ordering(115) 00:15:25.818 fused_ordering(116) 00:15:25.818 fused_ordering(117) 00:15:25.818 fused_ordering(118) 00:15:25.818 fused_ordering(119) 00:15:25.818 fused_ordering(120) 00:15:25.818 fused_ordering(121) 00:15:25.818 fused_ordering(122) 00:15:25.818 fused_ordering(123) 00:15:25.818 fused_ordering(124) 00:15:25.818 fused_ordering(125) 00:15:25.818 fused_ordering(126) 00:15:25.818 fused_ordering(127) 00:15:25.818 fused_ordering(128) 00:15:25.818 fused_ordering(129) 00:15:25.818 fused_ordering(130) 00:15:25.818 fused_ordering(131) 00:15:25.818 fused_ordering(132) 00:15:25.818 fused_ordering(133) 00:15:25.818 fused_ordering(134) 00:15:25.818 fused_ordering(135) 00:15:25.818 fused_ordering(136) 00:15:25.818 fused_ordering(137) 00:15:25.818 fused_ordering(138) 00:15:25.818 fused_ordering(139) 00:15:25.818 fused_ordering(140) 00:15:25.818 fused_ordering(141) 00:15:25.818 fused_ordering(142) 00:15:25.818 fused_ordering(143) 00:15:25.818 fused_ordering(144) 00:15:25.818 fused_ordering(145) 00:15:25.818 fused_ordering(146) 00:15:25.818 fused_ordering(147) 00:15:25.818 fused_ordering(148) 00:15:25.818 fused_ordering(149) 00:15:25.818 fused_ordering(150) 00:15:25.818 fused_ordering(151) 00:15:25.818 fused_ordering(152) 00:15:25.818 fused_ordering(153) 00:15:25.818 fused_ordering(154) 00:15:25.818 fused_ordering(155) 00:15:25.818 fused_ordering(156) 00:15:25.818 fused_ordering(157) 00:15:25.818 fused_ordering(158) 00:15:25.819 fused_ordering(159) 00:15:25.819 fused_ordering(160) 00:15:25.819 fused_ordering(161) 00:15:25.819 fused_ordering(162) 00:15:25.819 fused_ordering(163) 00:15:25.819 fused_ordering(164) 00:15:25.819 fused_ordering(165) 00:15:25.819 fused_ordering(166) 00:15:25.819 fused_ordering(167) 00:15:25.819 fused_ordering(168) 00:15:25.819 fused_ordering(169) 00:15:25.819 fused_ordering(170) 00:15:25.819 fused_ordering(171) 00:15:25.819 fused_ordering(172) 00:15:25.819 fused_ordering(173) 00:15:25.819 fused_ordering(174) 00:15:25.819 fused_ordering(175) 00:15:25.819 fused_ordering(176) 00:15:25.819 fused_ordering(177) 00:15:25.819 fused_ordering(178) 00:15:25.819 fused_ordering(179) 00:15:25.819 fused_ordering(180) 00:15:25.819 fused_ordering(181) 00:15:25.819 fused_ordering(182) 00:15:25.819 fused_ordering(183) 00:15:25.819 fused_ordering(184) 00:15:25.819 fused_ordering(185) 00:15:25.819 fused_ordering(186) 00:15:25.819 fused_ordering(187) 00:15:25.819 fused_ordering(188) 00:15:25.819 fused_ordering(189) 00:15:25.819 fused_ordering(190) 00:15:25.819 fused_ordering(191) 00:15:25.819 fused_ordering(192) 00:15:25.819 fused_ordering(193) 00:15:25.819 fused_ordering(194) 00:15:25.819 fused_ordering(195) 00:15:25.819 fused_ordering(196) 00:15:25.819 fused_ordering(197) 00:15:25.819 fused_ordering(198) 00:15:25.819 fused_ordering(199) 00:15:25.819 fused_ordering(200) 00:15:25.819 fused_ordering(201) 00:15:25.819 fused_ordering(202) 00:15:25.819 fused_ordering(203) 00:15:25.819 fused_ordering(204) 
00:15:25.819 fused_ordering(205) 00:15:26.080 fused_ordering(206) 00:15:26.080 fused_ordering(207) 00:15:26.080 fused_ordering(208) 00:15:26.080 fused_ordering(209) 00:15:26.080 fused_ordering(210) 00:15:26.080 fused_ordering(211) 00:15:26.080 fused_ordering(212) 00:15:26.080 fused_ordering(213) 00:15:26.080 fused_ordering(214) 00:15:26.080 fused_ordering(215) 00:15:26.080 fused_ordering(216) 00:15:26.080 fused_ordering(217) 00:15:26.080 fused_ordering(218) 00:15:26.080 fused_ordering(219) 00:15:26.080 fused_ordering(220) 00:15:26.081 fused_ordering(221) 00:15:26.081 fused_ordering(222) 00:15:26.081 fused_ordering(223) 00:15:26.081 fused_ordering(224) 00:15:26.081 fused_ordering(225) 00:15:26.081 fused_ordering(226) 00:15:26.081 fused_ordering(227) 00:15:26.081 fused_ordering(228) 00:15:26.081 fused_ordering(229) 00:15:26.081 fused_ordering(230) 00:15:26.081 fused_ordering(231) 00:15:26.081 fused_ordering(232) 00:15:26.081 fused_ordering(233) 00:15:26.081 fused_ordering(234) 00:15:26.081 fused_ordering(235) 00:15:26.081 fused_ordering(236) 00:15:26.081 fused_ordering(237) 00:15:26.081 fused_ordering(238) 00:15:26.081 fused_ordering(239) 00:15:26.081 fused_ordering(240) 00:15:26.081 fused_ordering(241) 00:15:26.081 fused_ordering(242) 00:15:26.081 fused_ordering(243) 00:15:26.081 fused_ordering(244) 00:15:26.081 fused_ordering(245) 00:15:26.081 fused_ordering(246) 00:15:26.081 fused_ordering(247) 00:15:26.081 fused_ordering(248) 00:15:26.081 fused_ordering(249) 00:15:26.081 fused_ordering(250) 00:15:26.081 fused_ordering(251) 00:15:26.081 fused_ordering(252) 00:15:26.081 fused_ordering(253) 00:15:26.081 fused_ordering(254) 00:15:26.081 fused_ordering(255) 00:15:26.081 fused_ordering(256) 00:15:26.081 fused_ordering(257) 00:15:26.081 fused_ordering(258) 00:15:26.081 fused_ordering(259) 00:15:26.081 fused_ordering(260) 00:15:26.081 fused_ordering(261) 00:15:26.081 fused_ordering(262) 00:15:26.081 fused_ordering(263) 00:15:26.081 fused_ordering(264) 00:15:26.081 fused_ordering(265) 00:15:26.081 fused_ordering(266) 00:15:26.081 fused_ordering(267) 00:15:26.081 fused_ordering(268) 00:15:26.081 fused_ordering(269) 00:15:26.081 fused_ordering(270) 00:15:26.081 fused_ordering(271) 00:15:26.081 fused_ordering(272) 00:15:26.081 fused_ordering(273) 00:15:26.081 fused_ordering(274) 00:15:26.081 fused_ordering(275) 00:15:26.081 fused_ordering(276) 00:15:26.081 fused_ordering(277) 00:15:26.081 fused_ordering(278) 00:15:26.081 fused_ordering(279) 00:15:26.081 fused_ordering(280) 00:15:26.081 fused_ordering(281) 00:15:26.081 fused_ordering(282) 00:15:26.081 fused_ordering(283) 00:15:26.081 fused_ordering(284) 00:15:26.081 fused_ordering(285) 00:15:26.081 fused_ordering(286) 00:15:26.081 fused_ordering(287) 00:15:26.081 fused_ordering(288) 00:15:26.081 fused_ordering(289) 00:15:26.081 fused_ordering(290) 00:15:26.081 fused_ordering(291) 00:15:26.081 fused_ordering(292) 00:15:26.081 fused_ordering(293) 00:15:26.081 fused_ordering(294) 00:15:26.081 fused_ordering(295) 00:15:26.081 fused_ordering(296) 00:15:26.081 fused_ordering(297) 00:15:26.081 fused_ordering(298) 00:15:26.081 fused_ordering(299) 00:15:26.081 fused_ordering(300) 00:15:26.081 fused_ordering(301) 00:15:26.081 fused_ordering(302) 00:15:26.081 fused_ordering(303) 00:15:26.081 fused_ordering(304) 00:15:26.081 fused_ordering(305) 00:15:26.081 fused_ordering(306) 00:15:26.081 fused_ordering(307) 00:15:26.081 fused_ordering(308) 00:15:26.081 fused_ordering(309) 00:15:26.081 fused_ordering(310) 00:15:26.081 fused_ordering(311) 00:15:26.081 
fused_ordering(312) 00:15:26.081 fused_ordering(313) 00:15:26.081 fused_ordering(314) 00:15:26.081 fused_ordering(315) 00:15:26.081 fused_ordering(316) 00:15:26.081 fused_ordering(317) 00:15:26.081 fused_ordering(318) 00:15:26.081 fused_ordering(319) 00:15:26.081 fused_ordering(320) 00:15:26.081 fused_ordering(321) 00:15:26.081 fused_ordering(322) 00:15:26.081 fused_ordering(323) 00:15:26.081 fused_ordering(324) 00:15:26.081 fused_ordering(325) 00:15:26.081 fused_ordering(326) 00:15:26.081 fused_ordering(327) 00:15:26.081 fused_ordering(328) 00:15:26.081 fused_ordering(329) 00:15:26.081 fused_ordering(330) 00:15:26.081 fused_ordering(331) 00:15:26.081 fused_ordering(332) 00:15:26.081 fused_ordering(333) 00:15:26.081 fused_ordering(334) 00:15:26.081 fused_ordering(335) 00:15:26.081 fused_ordering(336) 00:15:26.081 fused_ordering(337) 00:15:26.081 fused_ordering(338) 00:15:26.081 fused_ordering(339) 00:15:26.081 fused_ordering(340) 00:15:26.081 fused_ordering(341) 00:15:26.081 fused_ordering(342) 00:15:26.081 fused_ordering(343) 00:15:26.081 fused_ordering(344) 00:15:26.081 fused_ordering(345) 00:15:26.081 fused_ordering(346) 00:15:26.081 fused_ordering(347) 00:15:26.081 fused_ordering(348) 00:15:26.081 fused_ordering(349) 00:15:26.081 fused_ordering(350) 00:15:26.081 fused_ordering(351) 00:15:26.081 fused_ordering(352) 00:15:26.081 fused_ordering(353) 00:15:26.081 fused_ordering(354) 00:15:26.081 fused_ordering(355) 00:15:26.081 fused_ordering(356) 00:15:26.081 fused_ordering(357) 00:15:26.081 fused_ordering(358) 00:15:26.081 fused_ordering(359) 00:15:26.081 fused_ordering(360) 00:15:26.081 fused_ordering(361) 00:15:26.081 fused_ordering(362) 00:15:26.081 fused_ordering(363) 00:15:26.081 fused_ordering(364) 00:15:26.081 fused_ordering(365) 00:15:26.081 fused_ordering(366) 00:15:26.081 fused_ordering(367) 00:15:26.081 fused_ordering(368) 00:15:26.081 fused_ordering(369) 00:15:26.081 fused_ordering(370) 00:15:26.081 fused_ordering(371) 00:15:26.081 fused_ordering(372) 00:15:26.081 fused_ordering(373) 00:15:26.081 fused_ordering(374) 00:15:26.081 fused_ordering(375) 00:15:26.081 fused_ordering(376) 00:15:26.081 fused_ordering(377) 00:15:26.081 fused_ordering(378) 00:15:26.081 fused_ordering(379) 00:15:26.081 fused_ordering(380) 00:15:26.081 fused_ordering(381) 00:15:26.081 fused_ordering(382) 00:15:26.081 fused_ordering(383) 00:15:26.081 fused_ordering(384) 00:15:26.081 fused_ordering(385) 00:15:26.081 fused_ordering(386) 00:15:26.081 fused_ordering(387) 00:15:26.081 fused_ordering(388) 00:15:26.081 fused_ordering(389) 00:15:26.081 fused_ordering(390) 00:15:26.081 fused_ordering(391) 00:15:26.081 fused_ordering(392) 00:15:26.081 fused_ordering(393) 00:15:26.081 fused_ordering(394) 00:15:26.081 fused_ordering(395) 00:15:26.081 fused_ordering(396) 00:15:26.081 fused_ordering(397) 00:15:26.081 fused_ordering(398) 00:15:26.081 fused_ordering(399) 00:15:26.081 fused_ordering(400) 00:15:26.081 fused_ordering(401) 00:15:26.081 fused_ordering(402) 00:15:26.081 fused_ordering(403) 00:15:26.081 fused_ordering(404) 00:15:26.081 fused_ordering(405) 00:15:26.081 fused_ordering(406) 00:15:26.081 fused_ordering(407) 00:15:26.081 fused_ordering(408) 00:15:26.081 fused_ordering(409) 00:15:26.081 fused_ordering(410) 00:15:26.340 fused_ordering(411) 00:15:26.340 fused_ordering(412) 00:15:26.340 fused_ordering(413) 00:15:26.340 fused_ordering(414) 00:15:26.340 fused_ordering(415) 00:15:26.340 fused_ordering(416) 00:15:26.340 fused_ordering(417) 00:15:26.340 fused_ordering(418) 00:15:26.340 fused_ordering(419) 
00:15:26.340 fused_ordering(420) 00:15:26.340 fused_ordering(421) 00:15:26.340 fused_ordering(422) 00:15:26.340 fused_ordering(423) 00:15:26.340 fused_ordering(424) 00:15:26.340 fused_ordering(425) 00:15:26.340 fused_ordering(426) 00:15:26.340 fused_ordering(427) 00:15:26.340 fused_ordering(428) 00:15:26.340 fused_ordering(429) 00:15:26.340 fused_ordering(430) 00:15:26.340 fused_ordering(431) 00:15:26.340 fused_ordering(432) 00:15:26.340 fused_ordering(433) 00:15:26.340 fused_ordering(434) 00:15:26.340 fused_ordering(435) 00:15:26.340 fused_ordering(436) 00:15:26.340 fused_ordering(437) 00:15:26.340 fused_ordering(438) 00:15:26.340 fused_ordering(439) 00:15:26.340 fused_ordering(440) 00:15:26.340 fused_ordering(441) 00:15:26.340 fused_ordering(442) 00:15:26.340 fused_ordering(443) 00:15:26.340 fused_ordering(444) 00:15:26.340 fused_ordering(445) 00:15:26.340 fused_ordering(446) 00:15:26.340 fused_ordering(447) 00:15:26.340 fused_ordering(448) 00:15:26.340 fused_ordering(449) 00:15:26.340 fused_ordering(450) 00:15:26.340 fused_ordering(451) 00:15:26.340 fused_ordering(452) 00:15:26.340 fused_ordering(453) 00:15:26.340 fused_ordering(454) 00:15:26.340 fused_ordering(455) 00:15:26.340 fused_ordering(456) 00:15:26.340 fused_ordering(457) 00:15:26.341 fused_ordering(458) 00:15:26.341 fused_ordering(459) 00:15:26.341 fused_ordering(460) 00:15:26.341 fused_ordering(461) 00:15:26.341 fused_ordering(462) 00:15:26.341 fused_ordering(463) 00:15:26.341 fused_ordering(464) 00:15:26.341 fused_ordering(465) 00:15:26.341 fused_ordering(466) 00:15:26.341 fused_ordering(467) 00:15:26.341 fused_ordering(468) 00:15:26.341 fused_ordering(469) 00:15:26.341 fused_ordering(470) 00:15:26.341 fused_ordering(471) 00:15:26.341 fused_ordering(472) 00:15:26.341 fused_ordering(473) 00:15:26.341 fused_ordering(474) 00:15:26.341 fused_ordering(475) 00:15:26.341 fused_ordering(476) 00:15:26.341 fused_ordering(477) 00:15:26.341 fused_ordering(478) 00:15:26.341 fused_ordering(479) 00:15:26.341 fused_ordering(480) 00:15:26.341 fused_ordering(481) 00:15:26.341 fused_ordering(482) 00:15:26.341 fused_ordering(483) 00:15:26.341 fused_ordering(484) 00:15:26.341 fused_ordering(485) 00:15:26.341 fused_ordering(486) 00:15:26.341 fused_ordering(487) 00:15:26.341 fused_ordering(488) 00:15:26.341 fused_ordering(489) 00:15:26.341 fused_ordering(490) 00:15:26.341 fused_ordering(491) 00:15:26.341 fused_ordering(492) 00:15:26.341 fused_ordering(493) 00:15:26.341 fused_ordering(494) 00:15:26.341 fused_ordering(495) 00:15:26.341 fused_ordering(496) 00:15:26.341 fused_ordering(497) 00:15:26.341 fused_ordering(498) 00:15:26.341 fused_ordering(499) 00:15:26.341 fused_ordering(500) 00:15:26.341 fused_ordering(501) 00:15:26.341 fused_ordering(502) 00:15:26.341 fused_ordering(503) 00:15:26.341 fused_ordering(504) 00:15:26.341 fused_ordering(505) 00:15:26.341 fused_ordering(506) 00:15:26.341 fused_ordering(507) 00:15:26.341 fused_ordering(508) 00:15:26.341 fused_ordering(509) 00:15:26.341 fused_ordering(510) 00:15:26.341 fused_ordering(511) 00:15:26.341 fused_ordering(512) 00:15:26.341 fused_ordering(513) 00:15:26.341 fused_ordering(514) 00:15:26.341 fused_ordering(515) 00:15:26.341 fused_ordering(516) 00:15:26.341 fused_ordering(517) 00:15:26.341 fused_ordering(518) 00:15:26.341 fused_ordering(519) 00:15:26.341 fused_ordering(520) 00:15:26.341 fused_ordering(521) 00:15:26.341 fused_ordering(522) 00:15:26.341 fused_ordering(523) 00:15:26.341 fused_ordering(524) 00:15:26.341 fused_ordering(525) 00:15:26.341 fused_ordering(526) 00:15:26.341 
fused_ordering(527) 00:15:26.341 fused_ordering(528) 00:15:26.341 fused_ordering(529) 00:15:26.341 fused_ordering(530) 00:15:26.341 fused_ordering(531) 00:15:26.341 fused_ordering(532) 00:15:26.341 fused_ordering(533) 00:15:26.341 fused_ordering(534) 00:15:26.341 fused_ordering(535) 00:15:26.341 fused_ordering(536) 00:15:26.341 fused_ordering(537) 00:15:26.341 fused_ordering(538) 00:15:26.341 fused_ordering(539) 00:15:26.341 fused_ordering(540) 00:15:26.341 fused_ordering(541) 00:15:26.341 fused_ordering(542) 00:15:26.341 fused_ordering(543) 00:15:26.341 fused_ordering(544) 00:15:26.341 fused_ordering(545) 00:15:26.341 fused_ordering(546) 00:15:26.341 fused_ordering(547) 00:15:26.341 fused_ordering(548) 00:15:26.341 fused_ordering(549) 00:15:26.341 fused_ordering(550) 00:15:26.341 fused_ordering(551) 00:15:26.341 fused_ordering(552) 00:15:26.341 fused_ordering(553) 00:15:26.341 fused_ordering(554) 00:15:26.341 fused_ordering(555) 00:15:26.341 fused_ordering(556) 00:15:26.341 fused_ordering(557) 00:15:26.341 fused_ordering(558) 00:15:26.341 fused_ordering(559) 00:15:26.341 fused_ordering(560) 00:15:26.341 fused_ordering(561) 00:15:26.341 fused_ordering(562) 00:15:26.341 fused_ordering(563) 00:15:26.341 fused_ordering(564) 00:15:26.341 fused_ordering(565) 00:15:26.341 fused_ordering(566) 00:15:26.341 fused_ordering(567) 00:15:26.341 fused_ordering(568) 00:15:26.341 fused_ordering(569) 00:15:26.341 fused_ordering(570) 00:15:26.341 fused_ordering(571) 00:15:26.341 fused_ordering(572) 00:15:26.341 fused_ordering(573) 00:15:26.341 fused_ordering(574) 00:15:26.341 fused_ordering(575) 00:15:26.341 fused_ordering(576) 00:15:26.341 fused_ordering(577) 00:15:26.341 fused_ordering(578) 00:15:26.341 fused_ordering(579) 00:15:26.341 fused_ordering(580) 00:15:26.341 fused_ordering(581) 00:15:26.341 fused_ordering(582) 00:15:26.341 fused_ordering(583) 00:15:26.341 fused_ordering(584) 00:15:26.341 fused_ordering(585) 00:15:26.341 fused_ordering(586) 00:15:26.341 fused_ordering(587) 00:15:26.341 fused_ordering(588) 00:15:26.341 fused_ordering(589) 00:15:26.341 fused_ordering(590) 00:15:26.341 fused_ordering(591) 00:15:26.341 fused_ordering(592) 00:15:26.341 fused_ordering(593) 00:15:26.341 fused_ordering(594) 00:15:26.341 fused_ordering(595) 00:15:26.341 fused_ordering(596) 00:15:26.341 fused_ordering(597) 00:15:26.341 fused_ordering(598) 00:15:26.341 fused_ordering(599) 00:15:26.341 fused_ordering(600) 00:15:26.341 fused_ordering(601) 00:15:26.341 fused_ordering(602) 00:15:26.341 fused_ordering(603) 00:15:26.341 fused_ordering(604) 00:15:26.341 fused_ordering(605) 00:15:26.341 fused_ordering(606) 00:15:26.341 fused_ordering(607) 00:15:26.341 fused_ordering(608) 00:15:26.341 fused_ordering(609) 00:15:26.341 fused_ordering(610) 00:15:26.341 fused_ordering(611) 00:15:26.341 fused_ordering(612) 00:15:26.341 fused_ordering(613) 00:15:26.341 fused_ordering(614) 00:15:26.342 fused_ordering(615) 00:15:26.910 fused_ordering(616) 00:15:26.910 fused_ordering(617) 00:15:26.910 fused_ordering(618) 00:15:26.910 fused_ordering(619) 00:15:26.910 fused_ordering(620) 00:15:26.910 fused_ordering(621) 00:15:26.910 fused_ordering(622) 00:15:26.910 fused_ordering(623) 00:15:26.910 fused_ordering(624) 00:15:26.910 fused_ordering(625) 00:15:26.910 fused_ordering(626) 00:15:26.910 fused_ordering(627) 00:15:26.910 fused_ordering(628) 00:15:26.910 fused_ordering(629) 00:15:26.910 fused_ordering(630) 00:15:26.910 fused_ordering(631) 00:15:26.910 fused_ordering(632) 00:15:26.910 fused_ordering(633) 00:15:26.910 fused_ordering(634) 
00:15:26.910 fused_ordering(635) 00:15:26.910 fused_ordering(636) 00:15:26.910 fused_ordering(637) 00:15:26.910 fused_ordering(638) 00:15:26.910 fused_ordering(639) 00:15:26.910 fused_ordering(640) 00:15:26.910 fused_ordering(641) 00:15:26.910 fused_ordering(642) 00:15:26.910 fused_ordering(643) 00:15:26.910 fused_ordering(644) 00:15:26.910 fused_ordering(645) 00:15:26.910 fused_ordering(646) 00:15:26.910 fused_ordering(647) 00:15:26.910 fused_ordering(648) 00:15:26.910 fused_ordering(649) 00:15:26.910 fused_ordering(650) 00:15:26.910 fused_ordering(651) 00:15:26.910 fused_ordering(652) 00:15:26.910 fused_ordering(653) 00:15:26.910 fused_ordering(654) 00:15:26.910 fused_ordering(655) 00:15:26.910 fused_ordering(656) 00:15:26.910 fused_ordering(657) 00:15:26.910 fused_ordering(658) 00:15:26.910 fused_ordering(659) 00:15:26.910 fused_ordering(660) 00:15:26.910 fused_ordering(661) 00:15:26.910 fused_ordering(662) 00:15:26.910 fused_ordering(663) 00:15:26.910 fused_ordering(664) 00:15:26.910 fused_ordering(665) 00:15:26.910 fused_ordering(666) 00:15:26.910 fused_ordering(667) 00:15:26.910 fused_ordering(668) 00:15:26.910 fused_ordering(669) 00:15:26.910 fused_ordering(670) 00:15:26.910 fused_ordering(671) 00:15:26.910 fused_ordering(672) 00:15:26.910 fused_ordering(673) 00:15:26.910 fused_ordering(674) 00:15:26.910 fused_ordering(675) 00:15:26.910 fused_ordering(676) 00:15:26.910 fused_ordering(677) 00:15:26.910 fused_ordering(678) 00:15:26.910 fused_ordering(679) 00:15:26.910 fused_ordering(680) 00:15:26.910 fused_ordering(681) 00:15:26.910 fused_ordering(682) 00:15:26.910 fused_ordering(683) 00:15:26.910 fused_ordering(684) 00:15:26.910 fused_ordering(685) 00:15:26.910 fused_ordering(686) 00:15:26.910 fused_ordering(687) 00:15:26.910 fused_ordering(688) 00:15:26.910 fused_ordering(689) 00:15:26.910 fused_ordering(690) 00:15:26.910 fused_ordering(691) 00:15:26.910 fused_ordering(692) 00:15:26.910 fused_ordering(693) 00:15:26.910 fused_ordering(694) 00:15:26.910 fused_ordering(695) 00:15:26.910 fused_ordering(696) 00:15:26.910 fused_ordering(697) 00:15:26.910 fused_ordering(698) 00:15:26.910 fused_ordering(699) 00:15:26.910 fused_ordering(700) 00:15:26.910 fused_ordering(701) 00:15:26.910 fused_ordering(702) 00:15:26.910 fused_ordering(703) 00:15:26.910 fused_ordering(704) 00:15:26.910 fused_ordering(705) 00:15:26.910 fused_ordering(706) 00:15:26.910 fused_ordering(707) 00:15:26.910 fused_ordering(708) 00:15:26.910 fused_ordering(709) 00:15:26.910 fused_ordering(710) 00:15:26.910 fused_ordering(711) 00:15:26.910 fused_ordering(712) 00:15:26.910 fused_ordering(713) 00:15:26.910 fused_ordering(714) 00:15:26.910 fused_ordering(715) 00:15:26.910 fused_ordering(716) 00:15:26.910 fused_ordering(717) 00:15:26.910 fused_ordering(718) 00:15:26.910 fused_ordering(719) 00:15:26.910 fused_ordering(720) 00:15:26.910 fused_ordering(721) 00:15:26.910 fused_ordering(722) 00:15:26.910 fused_ordering(723) 00:15:26.910 fused_ordering(724) 00:15:26.910 fused_ordering(725) 00:15:26.910 fused_ordering(726) 00:15:26.910 fused_ordering(727) 00:15:26.910 fused_ordering(728) 00:15:26.910 fused_ordering(729) 00:15:26.910 fused_ordering(730) 00:15:26.910 fused_ordering(731) 00:15:26.910 fused_ordering(732) 00:15:26.910 fused_ordering(733) 00:15:26.910 fused_ordering(734) 00:15:26.910 fused_ordering(735) 00:15:26.910 fused_ordering(736) 00:15:26.910 fused_ordering(737) 00:15:26.910 fused_ordering(738) 00:15:26.910 fused_ordering(739) 00:15:26.910 fused_ordering(740) 00:15:26.910 fused_ordering(741) 00:15:26.910 
fused_ordering(742) 00:15:26.910 fused_ordering(743) 00:15:26.910 fused_ordering(744) 00:15:26.910 fused_ordering(745) 00:15:26.910 fused_ordering(746) 00:15:26.910 fused_ordering(747) 00:15:26.910 fused_ordering(748) 00:15:26.910 fused_ordering(749) 00:15:26.910 fused_ordering(750) 00:15:26.910 fused_ordering(751) 00:15:26.910 fused_ordering(752) 00:15:26.911 fused_ordering(753) 00:15:26.911 fused_ordering(754) 00:15:26.911 fused_ordering(755) 00:15:26.911 fused_ordering(756) 00:15:26.911 fused_ordering(757) 00:15:26.911 fused_ordering(758) 00:15:26.911 fused_ordering(759) 00:15:26.911 fused_ordering(760) 00:15:26.911 fused_ordering(761) 00:15:26.911 fused_ordering(762) 00:15:26.911 fused_ordering(763) 00:15:26.911 fused_ordering(764) 00:15:26.911 fused_ordering(765) 00:15:26.911 fused_ordering(766) 00:15:26.911 fused_ordering(767) 00:15:26.911 fused_ordering(768) 00:15:26.911 fused_ordering(769) 00:15:26.911 fused_ordering(770) 00:15:26.911 fused_ordering(771) 00:15:26.911 fused_ordering(772) 00:15:26.911 fused_ordering(773) 00:15:26.911 fused_ordering(774) 00:15:26.911 fused_ordering(775) 00:15:26.911 fused_ordering(776) 00:15:26.911 fused_ordering(777) 00:15:26.911 fused_ordering(778) 00:15:26.911 fused_ordering(779) 00:15:26.911 fused_ordering(780) 00:15:26.911 fused_ordering(781) 00:15:26.911 fused_ordering(782) 00:15:26.911 fused_ordering(783) 00:15:26.911 fused_ordering(784) 00:15:26.911 fused_ordering(785) 00:15:26.911 fused_ordering(786) 00:15:26.911 fused_ordering(787) 00:15:26.911 fused_ordering(788) 00:15:26.911 fused_ordering(789) 00:15:26.911 fused_ordering(790) 00:15:26.911 fused_ordering(791) 00:15:26.911 fused_ordering(792) 00:15:26.911 fused_ordering(793) 00:15:26.911 fused_ordering(794) 00:15:26.911 fused_ordering(795) 00:15:26.911 fused_ordering(796) 00:15:26.911 fused_ordering(797) 00:15:26.911 fused_ordering(798) 00:15:26.911 fused_ordering(799) 00:15:26.911 fused_ordering(800) 00:15:26.911 fused_ordering(801) 00:15:26.911 fused_ordering(802) 00:15:26.911 fused_ordering(803) 00:15:26.911 fused_ordering(804) 00:15:26.911 fused_ordering(805) 00:15:26.911 fused_ordering(806) 00:15:26.911 fused_ordering(807) 00:15:26.911 fused_ordering(808) 00:15:26.911 fused_ordering(809) 00:15:26.911 fused_ordering(810) 00:15:26.911 fused_ordering(811) 00:15:26.911 fused_ordering(812) 00:15:26.911 fused_ordering(813) 00:15:26.911 fused_ordering(814) 00:15:26.911 fused_ordering(815) 00:15:26.911 fused_ordering(816) 00:15:26.911 fused_ordering(817) 00:15:26.911 fused_ordering(818) 00:15:26.911 fused_ordering(819) 00:15:26.911 fused_ordering(820) 00:15:27.173 fused_ordering(821) 00:15:27.173 fused_ordering(822) 00:15:27.173 fused_ordering(823) 00:15:27.173 fused_ordering(824) 00:15:27.173 fused_ordering(825) 00:15:27.173 fused_ordering(826) 00:15:27.173 fused_ordering(827) 00:15:27.173 fused_ordering(828) 00:15:27.173 fused_ordering(829) 00:15:27.173 fused_ordering(830) 00:15:27.173 fused_ordering(831) 00:15:27.173 fused_ordering(832) 00:15:27.173 fused_ordering(833) 00:15:27.173 fused_ordering(834) 00:15:27.173 fused_ordering(835) 00:15:27.173 fused_ordering(836) 00:15:27.173 fused_ordering(837) 00:15:27.173 fused_ordering(838) 00:15:27.173 fused_ordering(839) 00:15:27.173 fused_ordering(840) 00:15:27.173 fused_ordering(841) 00:15:27.173 fused_ordering(842) 00:15:27.173 fused_ordering(843) 00:15:27.173 fused_ordering(844) 00:15:27.173 fused_ordering(845) 00:15:27.173 fused_ordering(846) 00:15:27.173 fused_ordering(847) 00:15:27.173 fused_ordering(848) 00:15:27.173 fused_ordering(849) 
00:15:27.173 fused_ordering(850) 00:15:27.173 fused_ordering(851) 00:15:27.173 fused_ordering(852) 00:15:27.173 fused_ordering(853) 00:15:27.173 fused_ordering(854) 00:15:27.173 fused_ordering(855) 00:15:27.173 fused_ordering(856) 00:15:27.173 fused_ordering(857) 00:15:27.173 fused_ordering(858) 00:15:27.173 fused_ordering(859) 00:15:27.173 fused_ordering(860) 00:15:27.173 fused_ordering(861) 00:15:27.173 fused_ordering(862) 00:15:27.173 fused_ordering(863) 00:15:27.173 fused_ordering(864) 00:15:27.173 fused_ordering(865) 00:15:27.173 fused_ordering(866) 00:15:27.173 fused_ordering(867) 00:15:27.173 fused_ordering(868) 00:15:27.173 fused_ordering(869) 00:15:27.173 fused_ordering(870) 00:15:27.173 fused_ordering(871) 00:15:27.173 fused_ordering(872) 00:15:27.173 fused_ordering(873) 00:15:27.173 fused_ordering(874) 00:15:27.173 fused_ordering(875) 00:15:27.173 fused_ordering(876) 00:15:27.173 fused_ordering(877) 00:15:27.173 fused_ordering(878) 00:15:27.173 fused_ordering(879) 00:15:27.173 fused_ordering(880) 00:15:27.173 fused_ordering(881) 00:15:27.173 fused_ordering(882) 00:15:27.173 fused_ordering(883) 00:15:27.173 fused_ordering(884) 00:15:27.173 fused_ordering(885) 00:15:27.173 fused_ordering(886) 00:15:27.173 fused_ordering(887) 00:15:27.173 fused_ordering(888) 00:15:27.173 fused_ordering(889) 00:15:27.173 fused_ordering(890) 00:15:27.173 fused_ordering(891) 00:15:27.173 fused_ordering(892) 00:15:27.173 fused_ordering(893) 00:15:27.173 fused_ordering(894) 00:15:27.173 fused_ordering(895) 00:15:27.173 fused_ordering(896) 00:15:27.173 fused_ordering(897) 00:15:27.173 fused_ordering(898) 00:15:27.173 fused_ordering(899) 00:15:27.173 fused_ordering(900) 00:15:27.173 fused_ordering(901) 00:15:27.173 fused_ordering(902) 00:15:27.173 fused_ordering(903) 00:15:27.173 fused_ordering(904) 00:15:27.173 fused_ordering(905) 00:15:27.173 fused_ordering(906) 00:15:27.173 fused_ordering(907) 00:15:27.173 fused_ordering(908) 00:15:27.173 fused_ordering(909) 00:15:27.173 fused_ordering(910) 00:15:27.173 fused_ordering(911) 00:15:27.173 fused_ordering(912) 00:15:27.173 fused_ordering(913) 00:15:27.173 fused_ordering(914) 00:15:27.173 fused_ordering(915) 00:15:27.173 fused_ordering(916) 00:15:27.173 fused_ordering(917) 00:15:27.173 fused_ordering(918) 00:15:27.173 fused_ordering(919) 00:15:27.173 fused_ordering(920) 00:15:27.173 fused_ordering(921) 00:15:27.173 fused_ordering(922) 00:15:27.173 fused_ordering(923) 00:15:27.173 fused_ordering(924) 00:15:27.173 fused_ordering(925) 00:15:27.173 fused_ordering(926) 00:15:27.173 fused_ordering(927) 00:15:27.173 fused_ordering(928) 00:15:27.173 fused_ordering(929) 00:15:27.173 fused_ordering(930) 00:15:27.173 fused_ordering(931) 00:15:27.173 fused_ordering(932) 00:15:27.173 fused_ordering(933) 00:15:27.173 fused_ordering(934) 00:15:27.173 fused_ordering(935) 00:15:27.173 fused_ordering(936) 00:15:27.173 fused_ordering(937) 00:15:27.173 fused_ordering(938) 00:15:27.173 fused_ordering(939) 00:15:27.173 fused_ordering(940) 00:15:27.173 fused_ordering(941) 00:15:27.173 fused_ordering(942) 00:15:27.173 fused_ordering(943) 00:15:27.173 fused_ordering(944) 00:15:27.173 fused_ordering(945) 00:15:27.173 fused_ordering(946) 00:15:27.173 fused_ordering(947) 00:15:27.173 fused_ordering(948) 00:15:27.173 fused_ordering(949) 00:15:27.173 fused_ordering(950) 00:15:27.173 fused_ordering(951) 00:15:27.173 fused_ordering(952) 00:15:27.173 fused_ordering(953) 00:15:27.173 fused_ordering(954) 00:15:27.173 fused_ordering(955) 00:15:27.173 fused_ordering(956) 00:15:27.173 
fused_ordering(957) 00:15:27.173 fused_ordering(958) 00:15:27.173 fused_ordering(959) 00:15:27.173 fused_ordering(960) 00:15:27.173 fused_ordering(961) 00:15:27.173 fused_ordering(962) 00:15:27.173 fused_ordering(963) 00:15:27.173 fused_ordering(964) 00:15:27.173 fused_ordering(965) 00:15:27.173 fused_ordering(966) 00:15:27.173 fused_ordering(967) 00:15:27.173 fused_ordering(968) 00:15:27.173 fused_ordering(969) 00:15:27.173 fused_ordering(970) 00:15:27.173 fused_ordering(971) 00:15:27.173 fused_ordering(972) 00:15:27.173 fused_ordering(973) 00:15:27.173 fused_ordering(974) 00:15:27.173 fused_ordering(975) 00:15:27.173 fused_ordering(976) 00:15:27.173 fused_ordering(977) 00:15:27.173 fused_ordering(978) 00:15:27.173 fused_ordering(979) 00:15:27.173 fused_ordering(980) 00:15:27.173 fused_ordering(981) 00:15:27.173 fused_ordering(982) 00:15:27.173 fused_ordering(983) 00:15:27.173 fused_ordering(984) 00:15:27.173 fused_ordering(985) 00:15:27.173 fused_ordering(986) 00:15:27.173 fused_ordering(987) 00:15:27.173 fused_ordering(988) 00:15:27.173 fused_ordering(989) 00:15:27.173 fused_ordering(990) 00:15:27.173 fused_ordering(991) 00:15:27.173 fused_ordering(992) 00:15:27.173 fused_ordering(993) 00:15:27.173 fused_ordering(994) 00:15:27.173 fused_ordering(995) 00:15:27.173 fused_ordering(996) 00:15:27.173 fused_ordering(997) 00:15:27.173 fused_ordering(998) 00:15:27.173 fused_ordering(999) 00:15:27.173 fused_ordering(1000) 00:15:27.173 fused_ordering(1001) 00:15:27.173 fused_ordering(1002) 00:15:27.173 fused_ordering(1003) 00:15:27.173 fused_ordering(1004) 00:15:27.173 fused_ordering(1005) 00:15:27.173 fused_ordering(1006) 00:15:27.173 fused_ordering(1007) 00:15:27.173 fused_ordering(1008) 00:15:27.173 fused_ordering(1009) 00:15:27.173 fused_ordering(1010) 00:15:27.173 fused_ordering(1011) 00:15:27.173 fused_ordering(1012) 00:15:27.173 fused_ordering(1013) 00:15:27.173 fused_ordering(1014) 00:15:27.173 fused_ordering(1015) 00:15:27.173 fused_ordering(1016) 00:15:27.173 fused_ordering(1017) 00:15:27.173 fused_ordering(1018) 00:15:27.173 fused_ordering(1019) 00:15:27.173 fused_ordering(1020) 00:15:27.173 fused_ordering(1021) 00:15:27.173 fused_ordering(1022) 00:15:27.173 fused_ordering(1023) 00:15:27.173 21:20:42 -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:15:27.173 21:20:42 -- target/fused_ordering.sh@25 -- # nvmftestfini 00:15:27.173 21:20:42 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:27.173 21:20:42 -- nvmf/common.sh@117 -- # sync 00:15:27.173 21:20:42 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:27.173 21:20:42 -- nvmf/common.sh@120 -- # set +e 00:15:27.173 21:20:42 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:27.174 21:20:42 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:27.174 rmmod nvme_tcp 00:15:27.174 rmmod nvme_fabrics 00:15:27.174 rmmod nvme_keyring 00:15:27.174 21:20:42 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:27.174 21:20:42 -- nvmf/common.sh@124 -- # set -e 00:15:27.174 21:20:42 -- nvmf/common.sh@125 -- # return 0 00:15:27.174 21:20:42 -- nvmf/common.sh@478 -- # '[' -n 1160738 ']' 00:15:27.174 21:20:42 -- nvmf/common.sh@479 -- # killprocess 1160738 00:15:27.174 21:20:42 -- common/autotest_common.sh@936 -- # '[' -z 1160738 ']' 00:15:27.174 21:20:42 -- common/autotest_common.sh@940 -- # kill -0 1160738 00:15:27.174 21:20:42 -- common/autotest_common.sh@941 -- # uname 00:15:27.174 21:20:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:27.174 21:20:42 -- common/autotest_common.sh@942 -- # ps --no-headers 
-o comm= 1160738 00:15:27.174 21:20:42 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:27.174 21:20:42 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:27.174 21:20:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1160738' 00:15:27.174 killing process with pid 1160738 00:15:27.174 21:20:42 -- common/autotest_common.sh@955 -- # kill 1160738 00:15:27.174 21:20:42 -- common/autotest_common.sh@960 -- # wait 1160738 00:15:27.744 21:20:42 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:27.744 21:20:42 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:15:27.744 21:20:42 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:15:27.744 21:20:42 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:27.744 21:20:42 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:27.744 21:20:42 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:27.744 21:20:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:27.744 21:20:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:30.286 21:20:44 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:30.286 00:15:30.286 real 0m10.776s 00:15:30.286 user 0m5.799s 00:15:30.286 sys 0m4.968s 00:15:30.286 21:20:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:30.286 21:20:44 -- common/autotest_common.sh@10 -- # set +x 00:15:30.286 ************************************ 00:15:30.286 END TEST nvmf_fused_ordering 00:15:30.286 ************************************ 00:15:30.286 21:20:44 -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:15:30.286 21:20:44 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:30.286 21:20:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:30.286 21:20:44 -- common/autotest_common.sh@10 -- # set +x 00:15:30.286 ************************************ 00:15:30.286 START TEST nvmf_delete_subsystem 00:15:30.286 ************************************ 00:15:30.286 21:20:44 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:15:30.286 * Looking for test storage... 
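The START TEST/END TEST banners and the real/user/sys summary above come from the harness's run_test wrapper; the xtrace frames show it living in common/autotest_common.sh, checking the argument count ('[' 3 -le 1 ']') and toggling xtrace around the script. A minimal sketch of that banner-and-timing pattern, assuming a simplified wrapper rather than SPDK's exact helper:

run_test() {
    # Hypothetical reconstruction: banner, run the test script under bash's
    # `time` keyword (which emits the real/user/sys lines seen above), then
    # banner again and propagate the script's exit status.
    local test_name=$1; shift
    echo '************************************'
    echo "START TEST $test_name"
    echo '************************************'
    time "$@"
    local rc=$?
    echo '************************************'
    echo "END TEST $test_name"
    echo '************************************'
    return $rc
}

run_test nvmf_delete_subsystem ./test/nvmf/target/delete_subsystem.sh --transport=tcp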
00:15:30.286 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:15:30.286 21:20:44 -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:15:30.286 21:20:44 -- nvmf/common.sh@7 -- # uname -s 00:15:30.286 21:20:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:30.286 21:20:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:30.286 21:20:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:30.286 21:20:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:30.286 21:20:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:30.286 21:20:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:30.286 21:20:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:30.286 21:20:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:30.286 21:20:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:30.286 21:20:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:30.286 21:20:44 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b7babf-2e5c-ee11-906e-a4bf01970bf2 00:15:30.286 21:20:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=80b7babf-2e5c-ee11-906e-a4bf01970bf2 00:15:30.286 21:20:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:30.286 21:20:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:30.286 21:20:44 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:15:30.286 21:20:44 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:30.287 21:20:44 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:15:30.287 21:20:44 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:30.287 21:20:44 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:30.287 21:20:44 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:30.287 21:20:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:30.287 21:20:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:30.287 21:20:44 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:30.287 21:20:44 -- paths/export.sh@5 -- # export PATH 00:15:30.287 21:20:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:30.287 21:20:44 -- nvmf/common.sh@47 -- # : 0 00:15:30.287 21:20:44 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:30.287 21:20:44 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:30.287 21:20:44 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:30.287 21:20:44 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:30.287 21:20:44 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:30.287 21:20:44 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:30.287 21:20:44 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:30.287 21:20:44 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:30.287 21:20:44 -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:15:30.287 21:20:44 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:15:30.287 21:20:44 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:30.287 21:20:44 -- nvmf/common.sh@437 -- # prepare_net_devs 00:15:30.287 21:20:44 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:15:30.287 21:20:44 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:15:30.287 21:20:44 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:30.287 21:20:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:30.287 21:20:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:30.287 21:20:44 -- nvmf/common.sh@403 -- # [[ phy-fallback != virt ]] 00:15:30.287 21:20:44 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:15:30.287 21:20:44 -- nvmf/common.sh@285 -- # xtrace_disable 00:15:30.287 21:20:44 -- common/autotest_common.sh@10 -- # set +x 00:15:35.572 21:20:50 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:35.572 21:20:50 -- nvmf/common.sh@291 -- # pci_devs=() 00:15:35.572 21:20:50 -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:35.572 21:20:50 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:35.572 21:20:50 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:35.572 21:20:50 -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:35.572 21:20:50 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:35.572 21:20:50 -- nvmf/common.sh@295 -- # net_devs=() 00:15:35.572 21:20:50 -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:35.572 21:20:50 -- nvmf/common.sh@296 -- # e810=() 00:15:35.572 21:20:50 -- nvmf/common.sh@296 -- # local -ga e810 00:15:35.572 21:20:50 -- nvmf/common.sh@297 -- 
# x722=() 00:15:35.572 21:20:50 -- nvmf/common.sh@297 -- # local -ga x722 00:15:35.572 21:20:50 -- nvmf/common.sh@298 -- # mlx=() 00:15:35.572 21:20:50 -- nvmf/common.sh@298 -- # local -ga mlx 00:15:35.572 21:20:50 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:35.572 21:20:50 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:35.572 21:20:50 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:35.572 21:20:50 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:35.572 21:20:50 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:35.572 21:20:50 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:35.572 21:20:50 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:35.572 21:20:50 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:35.572 21:20:50 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:35.572 21:20:50 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:35.572 21:20:50 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:35.572 21:20:50 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:35.572 21:20:50 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:35.572 21:20:50 -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:15:35.572 21:20:50 -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:15:35.572 21:20:50 -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:15:35.572 21:20:50 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:35.572 21:20:50 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:35.572 21:20:50 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:15:35.572 Found 0000:27:00.0 (0x8086 - 0x159b) 00:15:35.572 21:20:50 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:35.572 21:20:50 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:35.572 21:20:50 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:35.572 21:20:50 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:35.572 21:20:50 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:35.572 21:20:50 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:35.572 21:20:50 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:15:35.572 Found 0000:27:00.1 (0x8086 - 0x159b) 00:15:35.572 21:20:50 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:35.572 21:20:50 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:35.572 21:20:50 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:35.572 21:20:50 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:35.572 21:20:50 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:35.572 21:20:50 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:35.572 21:20:50 -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:15:35.572 21:20:50 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:35.572 21:20:50 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:35.572 21:20:50 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:35.572 21:20:50 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:35.572 21:20:50 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:15:35.572 Found net devices under 0000:27:00.0: cvl_0_0 00:15:35.572 21:20:50 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:35.573 21:20:50 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
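The discovery pass above buckets NICs into e810/x722/mlx arrays by vendor:device pair and then globs each matched function's net/ directory for its kernel netdevs. Roughly the same effect, sketched directly against sysfs and hard-coding the E810 IDs (0x8086:0x159b) this run matched:

for pci in /sys/bus/pci/devices/*; do
    vendor=$(cat "$pci/vendor")
    device=$(cat "$pci/device")
    # Keep only the Intel E810 (ice) functions, found here at 0000:27:00.0/.1
    [[ $vendor == 0x8086 && $device == 0x159b ]] || continue
    echo "Found ${pci##*/} ($vendor - $device)"
    # Kernel netdevs bound to this function, e.g. cvl_0_0 / cvl_0_1
    ls "$pci/net" 2>/dev/null
done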
00:15:35.573 21:20:50 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:35.573 21:20:50 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:35.573 21:20:50 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:35.573 21:20:50 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:15:35.573 Found net devices under 0000:27:00.1: cvl_0_1 00:15:35.573 21:20:50 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:35.573 21:20:50 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:15:35.573 21:20:50 -- nvmf/common.sh@403 -- # is_hw=yes 00:15:35.573 21:20:50 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:15:35.573 21:20:50 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:15:35.573 21:20:50 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:15:35.573 21:20:50 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:35.573 21:20:50 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:35.573 21:20:50 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:35.573 21:20:50 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:35.573 21:20:50 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:35.573 21:20:50 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:35.573 21:20:50 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:35.573 21:20:50 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:35.573 21:20:50 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:35.573 21:20:50 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:35.573 21:20:50 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:35.573 21:20:50 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:35.573 21:20:50 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:35.573 21:20:50 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:35.573 21:20:50 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:35.573 21:20:50 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:35.573 21:20:50 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:35.573 21:20:50 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:35.573 21:20:50 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:35.573 21:20:50 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:35.573 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:35.573 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.539 ms 00:15:35.573 00:15:35.573 --- 10.0.0.2 ping statistics --- 00:15:35.573 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:35.573 rtt min/avg/max/mdev = 0.539/0.539/0.539/0.000 ms 00:15:35.573 21:20:50 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:35.573 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:35.573 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.389 ms 00:15:35.573 00:15:35.573 --- 10.0.0.1 ping statistics --- 00:15:35.573 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:35.573 rtt min/avg/max/mdev = 0.389/0.389/0.389/0.000 ms 00:15:35.573 21:20:50 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:35.573 21:20:50 -- nvmf/common.sh@411 -- # return 0 00:15:35.573 21:20:50 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:15:35.573 21:20:50 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:35.573 21:20:50 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:15:35.573 21:20:50 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:15:35.573 21:20:50 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:35.573 21:20:50 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:15:35.573 21:20:50 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:15:35.573 21:20:50 -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:15:35.573 21:20:50 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:35.573 21:20:50 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:35.573 21:20:50 -- common/autotest_common.sh@10 -- # set +x 00:15:35.573 21:20:50 -- nvmf/common.sh@470 -- # nvmfpid=1165235 00:15:35.573 21:20:50 -- nvmf/common.sh@471 -- # waitforlisten 1165235 00:15:35.573 21:20:50 -- common/autotest_common.sh@817 -- # '[' -z 1165235 ']' 00:15:35.573 21:20:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:35.573 21:20:50 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:35.573 21:20:50 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:15:35.573 21:20:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:35.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:35.573 21:20:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:35.573 21:20:50 -- common/autotest_common.sh@10 -- # set +x 00:15:35.573 [2024-04-24 21:20:50.529739] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization... 00:15:35.573 [2024-04-24 21:20:50.529842] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:35.834 EAL: No free 2048 kB hugepages reported on node 1 00:15:35.834 [2024-04-24 21:20:50.653728] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:35.834 [2024-04-24 21:20:50.747422] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:35.834 [2024-04-24 21:20:50.747455] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:35.834 [2024-04-24 21:20:50.747468] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:35.834 [2024-04-24 21:20:50.747477] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:35.834 [2024-04-24 21:20:50.747485] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
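Condensed from the nvmf_tcp_init trace above and the rpc_cmd trace that follows, the bring-up for this test reduces to the sequence below. Interface names are the ice netdevs found earlier, paths are shortened to the spdk checkout, and the grouping is an illustrative reconstruction, not the scripts' exact control flow:

# One port moves into a namespace so target and initiator cross a real link.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# The target runs inside the namespace; its RPC socket stays on the shared
# filesystem, so rpc.py talks to /var/tmp/spdk.sock from the root namespace.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
rpc=./scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc bdev_null_create NULL1 1000 512
$rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
# Drive I/O from the root namespace, then delete the subsystem while the
# delay bdev still has requests queued; that is what produces the
# sct=0/sc=8 completions and 'starting I/O failed: -6' (likely -ENXIO)
# submissions later in this log.
./build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
sleep 2
$rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1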
00:15:35.834 [2024-04-24 21:20:50.747553] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:35.834 [2024-04-24 21:20:50.747561] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:36.405 21:20:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:36.405 21:20:51 -- common/autotest_common.sh@850 -- # return 0 00:15:36.405 21:20:51 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:36.405 21:20:51 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:36.405 21:20:51 -- common/autotest_common.sh@10 -- # set +x 00:15:36.405 21:20:51 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:36.405 21:20:51 -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:36.405 21:20:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:36.405 21:20:51 -- common/autotest_common.sh@10 -- # set +x 00:15:36.405 [2024-04-24 21:20:51.264050] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:36.405 21:20:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:36.405 21:20:51 -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:36.405 21:20:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:36.405 21:20:51 -- common/autotest_common.sh@10 -- # set +x 00:15:36.405 21:20:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:36.405 21:20:51 -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:36.405 21:20:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:36.405 21:20:51 -- common/autotest_common.sh@10 -- # set +x 00:15:36.405 [2024-04-24 21:20:51.280221] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:36.405 21:20:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:36.405 21:20:51 -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:15:36.405 21:20:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:36.405 21:20:51 -- common/autotest_common.sh@10 -- # set +x 00:15:36.405 NULL1 00:15:36.405 21:20:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:36.405 21:20:51 -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:15:36.405 21:20:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:36.405 21:20:51 -- common/autotest_common.sh@10 -- # set +x 00:15:36.405 Delay0 00:15:36.405 21:20:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:36.405 21:20:51 -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:36.405 21:20:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:36.405 21:20:51 -- common/autotest_common.sh@10 -- # set +x 00:15:36.405 21:20:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:36.405 21:20:51 -- target/delete_subsystem.sh@28 -- # perf_pid=1165288 00:15:36.405 21:20:51 -- target/delete_subsystem.sh@30 -- # sleep 2 00:15:36.405 21:20:51 -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:15:36.664 EAL: No free 2048 kB hugepages reported on node 1 00:15:36.664 [2024-04-24 21:20:51.395048] 
subsystem.c:1431:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:15:38.576 21:20:53 -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:38.576 21:20:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:38.576 21:20:53 -- common/autotest_common.sh@10 -- # set +x 00:15:38.836 [repeated 'Read/Write completed with error (sct=0, sc=8)' records, each burst of four followed by 'starting I/O failed: -6', elided] 00:15:38.836 [2024-04-24 21:20:53.542539] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000002440 is same with the state(5) to be set 00:15:38.836 [further 'Read/Write completed with error (sct=0, sc=8)' records, then more bursts ending in 'starting I/O failed: -6', elided] 00:15:38.836 [2024-04-24 21:20:53.544265] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000010040 is same with the state(5) to be set 00:15:38.836 [repeated 'Read/Write completed with error (sct=0, sc=8)' records elided] 00:15:39.777 [2024-04-24 21:20:54.495337] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000002240 is same with the state(5) to be set 00:15:39.777 [repeated 'Read/Write completed with error (sct=0, sc=8)' records elided] 00:15:39.777 [2024-04-24 21:20:54.543462] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000002640 is same with the state(5) to be set 00:15:39.777 [repeated 'Read/Write completed with error (sct=0, sc=8)' records elided] 00:15:39.777 [2024-04-24 21:20:54.543778] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000010240 is same with the state(5) to be set 00:15:39.777 [repeated 'Read/Write completed with error (sct=0, sc=8)' records elided] 00:15:39.777 [2024-04-24 21:20:54.543925] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000010640 is same with the state(5) to be set 00:15:39.777 [repeated 'Read/Write completed with error (sct=0, sc=8)' records elided] 00:15:39.777 [2024-04-24 21:20:54.544503] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000002a40 is same with the state(5) to be set 00:15:39.777 [2024-04-24 21:20:54.546488]
nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000002240 (9): Bad file descriptor 00:15:39.777 /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:15:39.777 21:20:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:39.777 21:20:54 -- target/delete_subsystem.sh@34 -- # delay=0 00:15:39.777 21:20:54 -- target/delete_subsystem.sh@35 -- # kill -0 1165288 00:15:39.777 21:20:54 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:15:39.777 Initializing NVMe Controllers 00:15:39.777 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:39.777 Controller IO queue size 128, less than required. 00:15:39.777 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:39.777 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:15:39.777 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:15:39.777 Initialization complete. Launching workers. 00:15:39.777 ======================================================== 00:15:39.777 Latency(us) 00:15:39.777 Device Information : IOPS MiB/s Average min max 00:15:39.777 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 171.41 0.08 891935.34 542.15 1011179.99 00:15:39.777 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 159.98 0.08 918794.79 404.33 1012004.45 00:15:39.777 ======================================================== 00:15:39.777 Total : 331.39 0.16 904901.97 404.33 1012004.45 00:15:39.777 00:15:40.372 21:20:55 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:15:40.372 21:20:55 -- target/delete_subsystem.sh@35 -- # kill -0 1165288 00:15:40.372 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1165288) - No such process 00:15:40.372 21:20:55 -- target/delete_subsystem.sh@45 -- # NOT wait 1165288 00:15:40.372 21:20:55 -- common/autotest_common.sh@638 -- # local es=0 00:15:40.372 21:20:55 -- common/autotest_common.sh@640 -- # valid_exec_arg wait 1165288 00:15:40.372 21:20:55 -- common/autotest_common.sh@626 -- # local arg=wait 00:15:40.372 21:20:55 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:40.372 21:20:55 -- common/autotest_common.sh@630 -- # type -t wait 00:15:40.372 21:20:55 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:40.372 21:20:55 -- common/autotest_common.sh@641 -- # wait 1165288 00:15:40.372 21:20:55 -- common/autotest_common.sh@641 -- # es=1 00:15:40.372 21:20:55 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:15:40.372 21:20:55 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:15:40.372 21:20:55 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:15:40.372 21:20:55 -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:40.372 21:20:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:40.372 21:20:55 -- common/autotest_common.sh@10 -- # set +x 00:15:40.372 21:20:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:40.372 21:20:55 -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:40.372 21:20:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:40.372 21:20:55 -- common/autotest_common.sh@10 -- # set +x 00:15:40.372 [2024-04-24 21:20:55.072851] 
tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:40.372 21:20:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:40.372 21:20:55 -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:40.372 21:20:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:40.372 21:20:55 -- common/autotest_common.sh@10 -- # set +x 00:15:40.372 21:20:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:40.372 21:20:55 -- target/delete_subsystem.sh@54 -- # perf_pid=1166051 00:15:40.372 21:20:55 -- target/delete_subsystem.sh@56 -- # delay=0 00:15:40.373 21:20:55 -- target/delete_subsystem.sh@57 -- # kill -0 1166051 00:15:40.373 21:20:55 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:40.373 21:20:55 -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:15:40.373 EAL: No free 2048 kB hugepages reported on node 1 00:15:40.373 [2024-04-24 21:20:55.182749] subsystem.c:1431:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:15:40.633 21:20:55 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:40.633 21:20:55 -- target/delete_subsystem.sh@57 -- # kill -0 1166051 00:15:40.633 21:20:55 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:41.203 21:20:56 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:41.203 21:20:56 -- target/delete_subsystem.sh@57 -- # kill -0 1166051 00:15:41.203 21:20:56 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:41.769 21:20:56 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:41.769 21:20:56 -- target/delete_subsystem.sh@57 -- # kill -0 1166051 00:15:41.769 21:20:56 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:42.335 21:20:57 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:42.335 21:20:57 -- target/delete_subsystem.sh@57 -- # kill -0 1166051 00:15:42.335 21:20:57 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:42.904 21:20:57 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:42.904 21:20:57 -- target/delete_subsystem.sh@57 -- # kill -0 1166051 00:15:42.904 21:20:57 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:43.164 21:20:58 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:43.164 21:20:58 -- target/delete_subsystem.sh@57 -- # kill -0 1166051 00:15:43.164 21:20:58 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:43.421 Initializing NVMe Controllers 00:15:43.421 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:43.421 Controller IO queue size 128, less than required. 00:15:43.421 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:43.421 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:15:43.421 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:15:43.421 Initialization complete. Launching workers. 
00:15:43.421 ======================================================== 00:15:43.421 Latency(us) 00:15:43.421 Device Information : IOPS MiB/s Average min max 00:15:43.421 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003040.58 1000219.69 1042930.92 00:15:43.421 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004767.34 1000295.93 1012031.36 00:15:43.421 ======================================================== 00:15:43.421 Total : 256.00 0.12 1003903.96 1000219.69 1042930.92 00:15:43.421 00:15:43.679 21:20:58 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:43.679 21:20:58 -- target/delete_subsystem.sh@57 -- # kill -0 1166051 00:15:43.679 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1166051) - No such process 00:15:43.679 21:20:58 -- target/delete_subsystem.sh@67 -- # wait 1166051 00:15:43.679 21:20:58 -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:15:43.679 21:20:58 -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:15:43.679 21:20:58 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:43.679 21:20:58 -- nvmf/common.sh@117 -- # sync 00:15:43.679 21:20:58 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:43.679 21:20:58 -- nvmf/common.sh@120 -- # set +e 00:15:43.679 21:20:58 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:43.679 21:20:58 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:43.679 rmmod nvme_tcp 00:15:43.938 rmmod nvme_fabrics 00:15:43.938 rmmod nvme_keyring 00:15:43.938 21:20:58 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:43.938 21:20:58 -- nvmf/common.sh@124 -- # set -e 00:15:43.938 21:20:58 -- nvmf/common.sh@125 -- # return 0 00:15:43.938 21:20:58 -- nvmf/common.sh@478 -- # '[' -n 1165235 ']' 00:15:43.938 21:20:58 -- nvmf/common.sh@479 -- # killprocess 1165235 00:15:43.938 21:20:58 -- common/autotest_common.sh@936 -- # '[' -z 1165235 ']' 00:15:43.938 21:20:58 -- common/autotest_common.sh@940 -- # kill -0 1165235 00:15:43.938 21:20:58 -- common/autotest_common.sh@941 -- # uname 00:15:43.938 21:20:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:43.938 21:20:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1165235 00:15:43.938 21:20:58 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:43.938 21:20:58 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:43.938 21:20:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1165235' 00:15:43.938 killing process with pid 1165235 00:15:43.938 21:20:58 -- common/autotest_common.sh@955 -- # kill 1165235 00:15:43.938 21:20:58 -- common/autotest_common.sh@960 -- # wait 1165235 00:15:44.508 21:20:59 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:44.508 21:20:59 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:15:44.508 21:20:59 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:15:44.508 21:20:59 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:44.508 21:20:59 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:44.508 21:20:59 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:44.508 21:20:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:44.509 21:20:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:46.415 21:21:01 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:46.415 00:15:46.415 real 0m16.435s 00:15:46.415 user 0m30.475s 00:15:46.415 sys 0m4.969s 00:15:46.415 
21:21:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:46.415 21:21:01 -- common/autotest_common.sh@10 -- # set +x 00:15:46.415 ************************************ 00:15:46.415 END TEST nvmf_delete_subsystem 00:15:46.415 ************************************ 00:15:46.415 21:21:01 -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:15:46.415 21:21:01 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:46.415 21:21:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:46.415 21:21:01 -- common/autotest_common.sh@10 -- # set +x 00:15:46.415 ************************************ 00:15:46.415 START TEST nvmf_ns_masking 00:15:46.415 ************************************ 00:15:46.415 21:21:01 -- common/autotest_common.sh@1111 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:15:46.674 * Looking for test storage... 00:15:46.674 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:15:46.674 21:21:01 -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:15:46.674 21:21:01 -- nvmf/common.sh@7 -- # uname -s 00:15:46.674 21:21:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:46.674 21:21:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:46.674 21:21:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:46.674 21:21:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:46.674 21:21:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:46.674 21:21:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:46.674 21:21:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:46.674 21:21:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:46.674 21:21:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:46.674 21:21:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:46.674 21:21:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b7babf-2e5c-ee11-906e-a4bf01970bf2 00:15:46.674 21:21:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=80b7babf-2e5c-ee11-906e-a4bf01970bf2 00:15:46.674 21:21:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:46.674 21:21:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:46.674 21:21:01 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:15:46.674 21:21:01 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:46.674 21:21:01 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:15:46.674 21:21:01 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:46.674 21:21:01 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:46.674 21:21:01 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:46.674 21:21:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:46.674 21:21:01 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:46.674 21:21:01 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:46.674 21:21:01 -- paths/export.sh@5 -- # export PATH 00:15:46.674 21:21:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:46.674 21:21:01 -- nvmf/common.sh@47 -- # : 0 00:15:46.674 21:21:01 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:46.674 21:21:01 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:46.674 21:21:01 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:46.674 21:21:01 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:46.674 21:21:01 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:46.674 21:21:01 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:46.674 21:21:01 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:46.674 21:21:01 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:46.674 21:21:01 -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:15:46.674 21:21:01 -- target/ns_masking.sh@11 -- # loops=5 00:15:46.674 21:21:01 -- target/ns_masking.sh@13 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:15:46.674 21:21:01 -- target/ns_masking.sh@14 -- # HOSTNQN=nqn.2016-06.io.spdk:host1 00:15:46.674 21:21:01 -- target/ns_masking.sh@15 -- # uuidgen 00:15:46.674 21:21:01 -- target/ns_masking.sh@15 -- # HOSTID=4285326f-873f-47e3-a758-20c307792c59 00:15:46.674 21:21:01 -- target/ns_masking.sh@44 -- # nvmftestinit 00:15:46.674 21:21:01 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:15:46.674 21:21:01 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:46.674 21:21:01 -- nvmf/common.sh@437 -- # prepare_net_devs 00:15:46.674 21:21:01 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:15:46.674 21:21:01 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:15:46.674 21:21:01 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:46.674 21:21:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:46.674 21:21:01 -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:15:46.674 21:21:01 -- nvmf/common.sh@403 -- # [[ phy-fallback != virt ]] 00:15:46.674 21:21:01 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:15:46.674 21:21:01 -- nvmf/common.sh@285 -- # xtrace_disable 00:15:46.674 21:21:01 -- common/autotest_common.sh@10 -- # set +x 00:15:51.946 21:21:06 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:51.946 21:21:06 -- nvmf/common.sh@291 -- # pci_devs=() 00:15:51.946 21:21:06 -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:51.946 21:21:06 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:51.947 21:21:06 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:51.947 21:21:06 -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:51.947 21:21:06 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:51.947 21:21:06 -- nvmf/common.sh@295 -- # net_devs=() 00:15:51.947 21:21:06 -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:51.947 21:21:06 -- nvmf/common.sh@296 -- # e810=() 00:15:51.947 21:21:06 -- nvmf/common.sh@296 -- # local -ga e810 00:15:51.947 21:21:06 -- nvmf/common.sh@297 -- # x722=() 00:15:51.947 21:21:06 -- nvmf/common.sh@297 -- # local -ga x722 00:15:51.947 21:21:06 -- nvmf/common.sh@298 -- # mlx=() 00:15:51.947 21:21:06 -- nvmf/common.sh@298 -- # local -ga mlx 00:15:51.947 21:21:06 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:51.947 21:21:06 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:51.947 21:21:06 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:51.947 21:21:06 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:51.947 21:21:06 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:51.947 21:21:06 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:51.947 21:21:06 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:51.947 21:21:06 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:51.947 21:21:06 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:51.947 21:21:06 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:51.947 21:21:06 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:51.947 21:21:06 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:51.947 21:21:06 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:51.947 21:21:06 -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:15:51.947 21:21:06 -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:15:51.947 21:21:06 -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:15:51.947 21:21:06 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:51.947 21:21:06 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:51.947 21:21:06 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:15:51.947 Found 0000:27:00.0 (0x8086 - 0x159b) 00:15:51.947 21:21:06 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:51.947 21:21:06 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:51.947 21:21:06 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:51.947 21:21:06 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:51.947 21:21:06 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:51.947 21:21:06 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:51.947 21:21:06 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:15:51.947 Found 0000:27:00.1 (0x8086 - 0x159b) 00:15:51.947 21:21:06 -- nvmf/common.sh@342 -- # [[ ice 
== unknown ]] 00:15:51.947 21:21:06 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:51.947 21:21:06 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:51.947 21:21:06 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:51.947 21:21:06 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:51.947 21:21:06 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:51.947 21:21:06 -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:15:51.947 21:21:06 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:51.947 21:21:06 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:51.947 21:21:06 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:51.947 21:21:06 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:51.947 21:21:06 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:15:51.947 Found net devices under 0000:27:00.0: cvl_0_0 00:15:51.947 21:21:06 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:51.947 21:21:06 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:51.947 21:21:06 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:51.947 21:21:06 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:51.947 21:21:06 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:51.947 21:21:06 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:15:51.947 Found net devices under 0000:27:00.1: cvl_0_1 00:15:51.947 21:21:06 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:51.947 21:21:06 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:15:51.947 21:21:06 -- nvmf/common.sh@403 -- # is_hw=yes 00:15:51.947 21:21:06 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:15:51.947 21:21:06 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:15:51.947 21:21:06 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:15:51.947 21:21:06 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:51.947 21:21:06 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:51.947 21:21:06 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:51.947 21:21:06 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:51.947 21:21:06 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:51.947 21:21:06 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:51.947 21:21:06 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:51.947 21:21:06 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:51.947 21:21:06 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:51.947 21:21:06 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:51.947 21:21:06 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:51.947 21:21:06 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:51.947 21:21:06 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:51.947 21:21:06 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:51.947 21:21:06 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:51.947 21:21:06 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:51.947 21:21:06 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:51.947 21:21:06 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:51.947 21:21:06 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:51.947 21:21:06 -- 
nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:51.947 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:51.947 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.471 ms 00:15:51.947 00:15:51.947 --- 10.0.0.2 ping statistics --- 00:15:51.947 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:51.947 rtt min/avg/max/mdev = 0.471/0.471/0.471/0.000 ms 00:15:51.947 21:21:06 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:51.947 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:51.947 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.320 ms 00:15:51.947 00:15:51.947 --- 10.0.0.1 ping statistics --- 00:15:51.947 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:51.947 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:15:51.947 21:21:06 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:51.947 21:21:06 -- nvmf/common.sh@411 -- # return 0 00:15:51.947 21:21:06 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:15:51.947 21:21:06 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:51.947 21:21:06 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:15:51.947 21:21:06 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:15:51.947 21:21:06 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:51.947 21:21:06 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:15:51.947 21:21:06 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:15:51.947 21:21:06 -- target/ns_masking.sh@45 -- # nvmfappstart -m 0xF 00:15:51.947 21:21:06 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:51.947 21:21:06 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:51.947 21:21:06 -- common/autotest_common.sh@10 -- # set +x 00:15:51.948 21:21:06 -- nvmf/common.sh@470 -- # nvmfpid=1171218 00:15:51.948 21:21:06 -- nvmf/common.sh@471 -- # waitforlisten 1171218 00:15:51.948 21:21:06 -- common/autotest_common.sh@817 -- # '[' -z 1171218 ']' 00:15:51.948 21:21:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:51.948 21:21:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:51.948 21:21:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:51.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:51.948 21:21:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:51.948 21:21:06 -- common/autotest_common.sh@10 -- # set +x 00:15:51.948 21:21:06 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:51.948 [2024-04-24 21:21:06.837925] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization... 00:15:51.948 [2024-04-24 21:21:06.838030] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:52.207 EAL: No free 2048 kB hugepages reported on node 1 00:15:52.207 [2024-04-24 21:21:06.955857] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:52.207 [2024-04-24 21:21:07.054516] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:52.207 [2024-04-24 21:21:07.054552] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
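Condensed, the nvmf_tcp_init wiring traced above reduces to the sketch below; the interface names (cvl_0_0, cvl_0_1) and the 10.0.0.0/24 addresses are the ones this run happened to pick, and everything runs as root, as in the log:

# Move one port of the NIC pair into a private namespace so target and
# initiator traffic actually crosses the link.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
# Initiator side stays in the default namespace as 10.0.0.1; the target
# side lives in the namespace as 10.0.0.2.
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Open the NVMe/TCP port, then prove both directions work before any
# NVMe traffic is attempted.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1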
00:15:52.207 [2024-04-24 21:21:07.054563] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:52.207 [2024-04-24 21:21:07.054572] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:52.207 [2024-04-24 21:21:07.054579] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:52.207 [2024-04-24 21:21:07.054731] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:52.207 [2024-04-24 21:21:07.054830] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:52.207 [2024-04-24 21:21:07.054931] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:52.207 [2024-04-24 21:21:07.054943] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:52.779 21:21:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:52.779 21:21:07 -- common/autotest_common.sh@850 -- # return 0 00:15:52.779 21:21:07 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:52.779 21:21:07 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:52.779 21:21:07 -- common/autotest_common.sh@10 -- # set +x 00:15:52.779 21:21:07 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:52.779 21:21:07 -- target/ns_masking.sh@47 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:52.779 [2024-04-24 21:21:07.724358] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:53.039 21:21:07 -- target/ns_masking.sh@49 -- # MALLOC_BDEV_SIZE=64 00:15:53.039 21:21:07 -- target/ns_masking.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:15:53.039 21:21:07 -- target/ns_masking.sh@52 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:53.039 Malloc1 00:15:53.039 21:21:07 -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:53.363 Malloc2 00:15:53.363 21:21:08 -- target/ns_masking.sh@56 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:53.363 21:21:08 -- target/ns_masking.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:15:53.640 21:21:08 -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:53.640 [2024-04-24 21:21:08.501444] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:53.640 21:21:08 -- target/ns_masking.sh@61 -- # connect 00:15:53.640 21:21:08 -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 4285326f-873f-47e3-a758-20c307792c59 -a 10.0.0.2 -s 4420 -i 4 00:15:53.898 21:21:08 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 00:15:53.898 21:21:08 -- common/autotest_common.sh@1184 -- # local i=0 00:15:53.898 21:21:08 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:15:53.898 21:21:08 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:15:53.898 21:21:08 -- common/autotest_common.sh@1191 -- # sleep 2 00:15:55.809 21:21:10 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:15:55.809 21:21:10 -- 
common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:15:55.809 21:21:10 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:15:55.809 21:21:10 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:15:55.809 21:21:10 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:15:55.809 21:21:10 -- common/autotest_common.sh@1194 -- # return 0 00:15:55.809 21:21:10 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:15:55.809 21:21:10 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:56.071 21:21:10 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:15:56.071 21:21:10 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:15:56.071 21:21:10 -- target/ns_masking.sh@62 -- # ns_is_visible 0x1 00:15:56.071 21:21:10 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:56.071 21:21:10 -- target/ns_masking.sh@39 -- # grep 0x1 00:15:56.071 [ 0]:0x1 00:15:56.071 21:21:10 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:56.071 21:21:10 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:56.071 21:21:10 -- target/ns_masking.sh@40 -- # nguid=68a521595e15410e889b7b07f5903abb 00:15:56.071 21:21:10 -- target/ns_masking.sh@41 -- # [[ 68a521595e15410e889b7b07f5903abb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:56.071 21:21:10 -- target/ns_masking.sh@65 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:15:56.071 21:21:10 -- target/ns_masking.sh@66 -- # ns_is_visible 0x1 00:15:56.071 21:21:10 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:56.071 21:21:10 -- target/ns_masking.sh@39 -- # grep 0x1 00:15:56.071 [ 0]:0x1 00:15:56.071 21:21:10 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:56.071 21:21:10 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:56.071 21:21:11 -- target/ns_masking.sh@40 -- # nguid=68a521595e15410e889b7b07f5903abb 00:15:56.071 21:21:11 -- target/ns_masking.sh@41 -- # [[ 68a521595e15410e889b7b07f5903abb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:56.071 21:21:11 -- target/ns_masking.sh@67 -- # ns_is_visible 0x2 00:15:56.071 21:21:11 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:56.071 21:21:11 -- target/ns_masking.sh@39 -- # grep 0x2 00:15:56.071 [ 1]:0x2 00:15:56.071 21:21:11 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:56.333 21:21:11 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:56.333 21:21:11 -- target/ns_masking.sh@40 -- # nguid=fdeee520d1584aab85472bf019246948 00:15:56.333 21:21:11 -- target/ns_masking.sh@41 -- # [[ fdeee520d1584aab85472bf019246948 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:56.333 21:21:11 -- target/ns_masking.sh@69 -- # disconnect 00:15:56.333 21:21:11 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:56.591 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:56.592 21:21:11 -- target/ns_masking.sh@73 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:56.592 21:21:11 -- target/ns_masking.sh@74 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:15:56.850 21:21:11 -- target/ns_masking.sh@77 -- # connect 1 00:15:56.850 
21:21:11 -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 4285326f-873f-47e3-a758-20c307792c59 -a 10.0.0.2 -s 4420 -i 4 00:15:57.108 21:21:11 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 1 00:15:57.108 21:21:11 -- common/autotest_common.sh@1184 -- # local i=0 00:15:57.108 21:21:11 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:15:57.108 21:21:11 -- common/autotest_common.sh@1186 -- # [[ -n 1 ]] 00:15:57.108 21:21:11 -- common/autotest_common.sh@1187 -- # nvme_device_counter=1 00:15:57.108 21:21:11 -- common/autotest_common.sh@1191 -- # sleep 2 00:15:59.014 21:21:13 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:15:59.014 21:21:13 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:15:59.014 21:21:13 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:15:59.014 21:21:13 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:15:59.014 21:21:13 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:15:59.014 21:21:13 -- common/autotest_common.sh@1194 -- # return 0 00:15:59.014 21:21:13 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:15:59.014 21:21:13 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:59.014 21:21:13 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:15:59.014 21:21:13 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:15:59.014 21:21:13 -- target/ns_masking.sh@78 -- # NOT ns_is_visible 0x1 00:15:59.014 21:21:13 -- common/autotest_common.sh@638 -- # local es=0 00:15:59.014 21:21:13 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:15:59.014 21:21:13 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:15:59.014 21:21:13 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:59.014 21:21:13 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:15:59.014 21:21:13 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:59.014 21:21:13 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:15:59.014 21:21:13 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:59.014 21:21:13 -- target/ns_masking.sh@39 -- # grep 0x1 00:15:59.275 21:21:13 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:59.275 21:21:13 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:59.275 21:21:14 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:15:59.275 21:21:14 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:59.275 21:21:14 -- common/autotest_common.sh@641 -- # es=1 00:15:59.275 21:21:14 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:15:59.275 21:21:14 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:15:59.275 21:21:14 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:15:59.275 21:21:14 -- target/ns_masking.sh@79 -- # ns_is_visible 0x2 00:15:59.275 21:21:14 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:59.275 21:21:14 -- target/ns_masking.sh@39 -- # grep 0x2 00:15:59.275 [ 0]:0x2 00:15:59.275 21:21:14 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:59.275 21:21:14 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:59.275 21:21:14 -- target/ns_masking.sh@40 -- # nguid=fdeee520d1584aab85472bf019246948 00:15:59.275 21:21:14 -- 
target/ns_masking.sh@41 -- # [[ fdeee520d1584aab85472bf019246948 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:59.275 21:21:14 -- target/ns_masking.sh@82 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:59.536 21:21:14 -- target/ns_masking.sh@83 -- # ns_is_visible 0x1 00:15:59.536 21:21:14 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:59.536 21:21:14 -- target/ns_masking.sh@39 -- # grep 0x1 00:15:59.536 [ 0]:0x1 00:15:59.536 21:21:14 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:59.536 21:21:14 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:59.536 21:21:14 -- target/ns_masking.sh@40 -- # nguid=68a521595e15410e889b7b07f5903abb 00:15:59.536 21:21:14 -- target/ns_masking.sh@41 -- # [[ 68a521595e15410e889b7b07f5903abb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:59.536 21:21:14 -- target/ns_masking.sh@84 -- # ns_is_visible 0x2 00:15:59.537 21:21:14 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:59.537 21:21:14 -- target/ns_masking.sh@39 -- # grep 0x2 00:15:59.537 [ 1]:0x2 00:15:59.537 21:21:14 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:59.537 21:21:14 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:59.537 21:21:14 -- target/ns_masking.sh@40 -- # nguid=fdeee520d1584aab85472bf019246948 00:15:59.537 21:21:14 -- target/ns_masking.sh@41 -- # [[ fdeee520d1584aab85472bf019246948 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:59.537 21:21:14 -- target/ns_masking.sh@87 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:59.796 21:21:14 -- target/ns_masking.sh@88 -- # NOT ns_is_visible 0x1 00:15:59.796 21:21:14 -- common/autotest_common.sh@638 -- # local es=0 00:15:59.796 21:21:14 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:15:59.796 21:21:14 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:15:59.796 21:21:14 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:59.796 21:21:14 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:15:59.796 21:21:14 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:59.796 21:21:14 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:15:59.796 21:21:14 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:59.796 21:21:14 -- target/ns_masking.sh@39 -- # grep 0x1 00:15:59.796 21:21:14 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:59.796 21:21:14 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:59.796 21:21:14 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:15:59.796 21:21:14 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:59.796 21:21:14 -- common/autotest_common.sh@641 -- # es=1 00:15:59.796 21:21:14 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:15:59.796 21:21:14 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:15:59.796 21:21:14 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:15:59.796 21:21:14 -- target/ns_masking.sh@89 -- # ns_is_visible 0x2 00:15:59.796 21:21:14 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:59.796 21:21:14 -- target/ns_masking.sh@39 -- # grep 0x2 00:15:59.796 [ 0]:0x2 00:15:59.796 
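The masking steps being exercised here boil down to a handful of commands; $rpc stands for the workspace rpc.py used throughout, and the NQNs are the ones this test created:

rpc=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py
# Re-attach namespace 1 without auto-visibility: no host can see it until
# visibility is explicitly granted.
$rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
# Grant, then revoke, visibility of namespace ID 1 for a single host NQN.
$rpc nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
$rpc nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
# On the initiator, ns_is_visible keys off the NGUID: a masked namespace
# identifies with an all-zero NGUID.
nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid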
21:21:14 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:59.796 21:21:14 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:59.796 21:21:14 -- target/ns_masking.sh@40 -- # nguid=fdeee520d1584aab85472bf019246948 00:15:59.796 21:21:14 -- target/ns_masking.sh@41 -- # [[ fdeee520d1584aab85472bf019246948 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:59.796 21:21:14 -- target/ns_masking.sh@91 -- # disconnect 00:15:59.796 21:21:14 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:59.796 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:59.796 21:21:14 -- target/ns_masking.sh@94 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:00.054 21:21:14 -- target/ns_masking.sh@95 -- # connect 2 00:16:00.055 21:21:14 -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 4285326f-873f-47e3-a758-20c307792c59 -a 10.0.0.2 -s 4420 -i 4 00:16:00.313 21:21:15 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 2 00:16:00.313 21:21:15 -- common/autotest_common.sh@1184 -- # local i=0 00:16:00.313 21:21:15 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:16:00.313 21:21:15 -- common/autotest_common.sh@1186 -- # [[ -n 2 ]] 00:16:00.313 21:21:15 -- common/autotest_common.sh@1187 -- # nvme_device_counter=2 00:16:00.313 21:21:15 -- common/autotest_common.sh@1191 -- # sleep 2 00:16:02.218 21:21:17 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:16:02.218 21:21:17 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:16:02.218 21:21:17 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:16:02.218 21:21:17 -- common/autotest_common.sh@1193 -- # nvme_devices=2 00:16:02.219 21:21:17 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:16:02.219 21:21:17 -- common/autotest_common.sh@1194 -- # return 0 00:16:02.219 21:21:17 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:16:02.219 21:21:17 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:16:02.219 21:21:17 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:16:02.219 21:21:17 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:16:02.219 21:21:17 -- target/ns_masking.sh@96 -- # ns_is_visible 0x1 00:16:02.219 21:21:17 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:16:02.219 21:21:17 -- target/ns_masking.sh@39 -- # grep 0x1 00:16:02.219 [ 0]:0x1 00:16:02.219 21:21:17 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:02.219 21:21:17 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:16:02.219 21:21:17 -- target/ns_masking.sh@40 -- # nguid=68a521595e15410e889b7b07f5903abb 00:16:02.219 21:21:17 -- target/ns_masking.sh@41 -- # [[ 68a521595e15410e889b7b07f5903abb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:02.219 21:21:17 -- target/ns_masking.sh@97 -- # ns_is_visible 0x2 00:16:02.219 21:21:17 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:16:02.219 21:21:17 -- target/ns_masking.sh@39 -- # grep 0x2 00:16:02.219 [ 1]:0x2 00:16:02.219 21:21:17 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:02.219 21:21:17 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:16:02.219 21:21:17 -- target/ns_masking.sh@40 -- # 
nguid=fdeee520d1584aab85472bf019246948 00:16:02.219 21:21:17 -- target/ns_masking.sh@41 -- # [[ fdeee520d1584aab85472bf019246948 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:02.219 21:21:17 -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:02.477 21:21:17 -- target/ns_masking.sh@101 -- # NOT ns_is_visible 0x1 00:16:02.477 21:21:17 -- common/autotest_common.sh@638 -- # local es=0 00:16:02.477 21:21:17 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:16:02.477 21:21:17 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:16:02.477 21:21:17 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:02.477 21:21:17 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:16:02.477 21:21:17 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:02.477 21:21:17 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:16:02.477 21:21:17 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:16:02.477 21:21:17 -- target/ns_masking.sh@39 -- # grep 0x1 00:16:02.477 21:21:17 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:02.477 21:21:17 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:16:02.477 21:21:17 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:16:02.478 21:21:17 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:02.478 21:21:17 -- common/autotest_common.sh@641 -- # es=1 00:16:02.478 21:21:17 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:16:02.478 21:21:17 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:16:02.478 21:21:17 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:16:02.478 21:21:17 -- target/ns_masking.sh@102 -- # ns_is_visible 0x2 00:16:02.478 21:21:17 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:16:02.478 21:21:17 -- target/ns_masking.sh@39 -- # grep 0x2 00:16:02.478 [ 0]:0x2 00:16:02.478 21:21:17 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:02.478 21:21:17 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:16:02.738 21:21:17 -- target/ns_masking.sh@40 -- # nguid=fdeee520d1584aab85472bf019246948 00:16:02.738 21:21:17 -- target/ns_masking.sh@41 -- # [[ fdeee520d1584aab85472bf019246948 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:02.738 21:21:17 -- target/ns_masking.sh@105 -- # NOT /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:16:02.738 21:21:17 -- common/autotest_common.sh@638 -- # local es=0 00:16:02.738 21:21:17 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:16:02.738 21:21:17 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:16:02.738 21:21:17 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:02.738 21:21:17 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:16:02.738 21:21:17 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:02.738 21:21:17 -- common/autotest_common.sh@632 -- # type -P 
/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:16:02.738 21:21:17 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:02.738 21:21:17 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:16:02.738 21:21:17 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py ]] 00:16:02.738 21:21:17 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:16:02.738 [2024-04-24 21:21:17.609980] nvmf_rpc.c:1774:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:16:02.738 request: 00:16:02.738 { 00:16:02.738 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:02.738 "nsid": 2, 00:16:02.738 "host": "nqn.2016-06.io.spdk:host1", 00:16:02.738 "method": "nvmf_ns_remove_host", 00:16:02.738 "req_id": 1 00:16:02.738 } 00:16:02.738 Got JSON-RPC error response 00:16:02.738 response: 00:16:02.738 { 00:16:02.738 "code": -32602, 00:16:02.738 "message": "Invalid parameters" 00:16:02.738 } 00:16:02.738 21:21:17 -- common/autotest_common.sh@641 -- # es=1 00:16:02.738 21:21:17 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:16:02.738 21:21:17 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:16:02.738 21:21:17 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:16:02.738 21:21:17 -- target/ns_masking.sh@106 -- # NOT ns_is_visible 0x1 00:16:02.738 21:21:17 -- common/autotest_common.sh@638 -- # local es=0 00:16:02.738 21:21:17 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:16:02.738 21:21:17 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:16:02.738 21:21:17 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:02.738 21:21:17 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:16:02.738 21:21:17 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:02.738 21:21:17 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:16:02.738 21:21:17 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:16:02.738 21:21:17 -- target/ns_masking.sh@39 -- # grep 0x1 00:16:02.738 21:21:17 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:02.738 21:21:17 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:16:02.738 21:21:17 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:16:02.738 21:21:17 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:02.738 21:21:17 -- common/autotest_common.sh@641 -- # es=1 00:16:02.738 21:21:17 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:16:02.738 21:21:17 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:16:02.738 21:21:17 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:16:02.738 21:21:17 -- target/ns_masking.sh@107 -- # ns_is_visible 0x2 00:16:02.738 21:21:17 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:16:02.997 21:21:17 -- target/ns_masking.sh@39 -- # grep 0x2 00:16:02.997 [ 0]:0x2 00:16:02.997 21:21:17 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:02.997 21:21:17 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:16:02.997 21:21:17 -- target/ns_masking.sh@40 -- # nguid=fdeee520d1584aab85472bf019246948 00:16:02.997 21:21:17 -- target/ns_masking.sh@41 -- # [[ fdeee520d1584aab85472bf019246948 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:02.997 21:21:17 -- target/ns_masking.sh@108 -- # disconnect 00:16:02.997 21:21:17 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:02.997 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:02.997 21:21:17 -- target/ns_masking.sh@110 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:03.256 21:21:18 -- target/ns_masking.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:16:03.256 21:21:18 -- target/ns_masking.sh@114 -- # nvmftestfini 00:16:03.256 21:21:18 -- nvmf/common.sh@477 -- # nvmfcleanup 00:16:03.256 21:21:18 -- nvmf/common.sh@117 -- # sync 00:16:03.256 21:21:18 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:03.256 21:21:18 -- nvmf/common.sh@120 -- # set +e 00:16:03.256 21:21:18 -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:03.256 21:21:18 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:03.256 rmmod nvme_tcp 00:16:03.256 rmmod nvme_fabrics 00:16:03.256 rmmod nvme_keyring 00:16:03.256 21:21:18 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:03.256 21:21:18 -- nvmf/common.sh@124 -- # set -e 00:16:03.256 21:21:18 -- nvmf/common.sh@125 -- # return 0 00:16:03.256 21:21:18 -- nvmf/common.sh@478 -- # '[' -n 1171218 ']' 00:16:03.256 21:21:18 -- nvmf/common.sh@479 -- # killprocess 1171218 00:16:03.256 21:21:18 -- common/autotest_common.sh@936 -- # '[' -z 1171218 ']' 00:16:03.256 21:21:18 -- common/autotest_common.sh@940 -- # kill -0 1171218 00:16:03.256 21:21:18 -- common/autotest_common.sh@941 -- # uname 00:16:03.256 21:21:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:03.256 21:21:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1171218 00:16:03.256 21:21:18 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:03.256 21:21:18 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:03.256 21:21:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1171218' 00:16:03.256 killing process with pid 1171218 00:16:03.256 21:21:18 -- common/autotest_common.sh@955 -- # kill 1171218 00:16:03.256 21:21:18 -- common/autotest_common.sh@960 -- # wait 1171218 00:16:03.825 21:21:18 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:16:03.825 21:21:18 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:16:03.825 21:21:18 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:16:03.825 21:21:18 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:03.825 21:21:18 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:03.825 21:21:18 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:03.825 21:21:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:03.825 21:21:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:06.392 21:21:20 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:06.392 00:16:06.392 real 0m19.475s 00:16:06.392 user 0m49.951s 00:16:06.392 sys 0m5.370s 00:16:06.392 21:21:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:06.392 21:21:20 -- common/autotest_common.sh@10 -- # set +x 00:16:06.392 ************************************ 00:16:06.392 END TEST nvmf_ns_masking 00:16:06.392 ************************************ 00:16:06.392 21:21:20 -- nvmf/nvmf.sh@37 -- # [[ 0 -eq 1 ]] 00:16:06.392 21:21:20 -- nvmf/nvmf.sh@40 -- # [[ 0 -eq 1 ]] 00:16:06.392 21:21:20 -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management 
/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:16:06.392 21:21:20 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:06.392 21:21:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:06.392 21:21:20 -- common/autotest_common.sh@10 -- # set +x 00:16:06.392 ************************************ 00:16:06.392 START TEST nvmf_host_management 00:16:06.392 ************************************ 00:16:06.392 21:21:20 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:16:06.392 * Looking for test storage... 00:16:06.392 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:16:06.392 21:21:21 -- target/host_management.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:16:06.392 21:21:21 -- nvmf/common.sh@7 -- # uname -s 00:16:06.392 21:21:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:06.392 21:21:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:06.392 21:21:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:06.392 21:21:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:06.392 21:21:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:06.392 21:21:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:06.392 21:21:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:06.392 21:21:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:06.392 21:21:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:06.392 21:21:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:06.392 21:21:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b7babf-2e5c-ee11-906e-a4bf01970bf2 00:16:06.392 21:21:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=80b7babf-2e5c-ee11-906e-a4bf01970bf2 00:16:06.392 21:21:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:06.392 21:21:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:06.392 21:21:21 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:16:06.392 21:21:21 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:06.392 21:21:21 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:16:06.392 21:21:21 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:06.392 21:21:21 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:06.392 21:21:21 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:06.392 21:21:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:06.392 21:21:21 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:06.392 21:21:21 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:06.392 21:21:21 -- paths/export.sh@5 -- # export PATH 00:16:06.392 21:21:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:06.392 21:21:21 -- nvmf/common.sh@47 -- # : 0 00:16:06.392 21:21:21 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:06.392 21:21:21 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:06.392 21:21:21 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:06.392 21:21:21 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:06.392 21:21:21 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:06.392 21:21:21 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:06.392 21:21:21 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:06.392 21:21:21 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:06.392 21:21:21 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:06.392 21:21:21 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:06.392 21:21:21 -- target/host_management.sh@105 -- # nvmftestinit 00:16:06.392 21:21:21 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:16:06.392 21:21:21 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:06.392 21:21:21 -- nvmf/common.sh@437 -- # prepare_net_devs 00:16:06.392 21:21:21 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:06.392 21:21:21 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:06.392 21:21:21 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:06.392 21:21:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:06.392 21:21:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:06.392 21:21:21 -- nvmf/common.sh@403 -- # [[ phy-fallback != virt ]] 00:16:06.392 21:21:21 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:16:06.392 21:21:21 -- nvmf/common.sh@285 -- # xtrace_disable 00:16:06.392 21:21:21 -- common/autotest_common.sh@10 -- # set +x 00:16:11.679 21:21:25 -- nvmf/common.sh@289 -- # local 
intel=0x8086 mellanox=0x15b3 pci 00:16:11.679 21:21:25 -- nvmf/common.sh@291 -- # pci_devs=() 00:16:11.679 21:21:25 -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:11.679 21:21:25 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:11.679 21:21:25 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:11.679 21:21:25 -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:11.679 21:21:25 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:11.679 21:21:25 -- nvmf/common.sh@295 -- # net_devs=() 00:16:11.679 21:21:25 -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:11.679 21:21:25 -- nvmf/common.sh@296 -- # e810=() 00:16:11.679 21:21:25 -- nvmf/common.sh@296 -- # local -ga e810 00:16:11.679 21:21:25 -- nvmf/common.sh@297 -- # x722=() 00:16:11.679 21:21:25 -- nvmf/common.sh@297 -- # local -ga x722 00:16:11.679 21:21:25 -- nvmf/common.sh@298 -- # mlx=() 00:16:11.679 21:21:25 -- nvmf/common.sh@298 -- # local -ga mlx 00:16:11.679 21:21:25 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:11.679 21:21:25 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:11.679 21:21:25 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:11.679 21:21:25 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:11.679 21:21:25 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:11.679 21:21:25 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:11.679 21:21:25 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:11.679 21:21:25 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:11.679 21:21:25 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:11.679 21:21:25 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:11.679 21:21:25 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:11.679 21:21:25 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:11.679 21:21:25 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:11.679 21:21:25 -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:16:11.679 21:21:25 -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:16:11.679 21:21:25 -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:16:11.679 21:21:25 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:11.679 21:21:25 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:11.679 21:21:25 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:16:11.679 Found 0000:27:00.0 (0x8086 - 0x159b) 00:16:11.679 21:21:25 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:11.679 21:21:25 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:11.679 21:21:25 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:11.679 21:21:25 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:11.679 21:21:25 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:11.679 21:21:25 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:11.679 21:21:25 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:16:11.679 Found 0000:27:00.1 (0x8086 - 0x159b) 00:16:11.679 21:21:25 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:11.679 21:21:25 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:11.679 21:21:25 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:11.679 21:21:25 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:11.679 21:21:25 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:11.679 21:21:25 -- nvmf/common.sh@366 -- # 
(( 0 > 0 )) 00:16:11.679 21:21:25 -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:16:11.679 21:21:25 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:11.679 21:21:25 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:11.679 21:21:25 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:11.679 21:21:25 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:11.679 21:21:25 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:16:11.679 Found net devices under 0000:27:00.0: cvl_0_0 00:16:11.679 21:21:25 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:11.679 21:21:25 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:11.679 21:21:25 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:11.679 21:21:25 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:11.679 21:21:25 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:11.679 21:21:25 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:16:11.679 Found net devices under 0000:27:00.1: cvl_0_1 00:16:11.679 21:21:25 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:11.679 21:21:25 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:16:11.679 21:21:25 -- nvmf/common.sh@403 -- # is_hw=yes 00:16:11.679 21:21:25 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:16:11.679 21:21:25 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:16:11.679 21:21:25 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:16:11.679 21:21:25 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:11.679 21:21:25 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:11.679 21:21:25 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:11.679 21:21:25 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:11.679 21:21:25 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:11.679 21:21:25 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:11.679 21:21:25 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:11.679 21:21:25 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:11.679 21:21:25 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:11.679 21:21:25 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:11.679 21:21:25 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:11.679 21:21:25 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:11.679 21:21:25 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:11.679 21:21:25 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:11.679 21:21:25 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:11.679 21:21:25 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:11.679 21:21:25 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:11.679 21:21:25 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:11.679 21:21:25 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:11.679 21:21:26 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:11.679 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:11.679 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.210 ms 00:16:11.679 00:16:11.679 --- 10.0.0.2 ping statistics --- 00:16:11.679 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:11.679 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:16:11.679 21:21:26 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:11.679 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:11.679 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.089 ms 00:16:11.679 00:16:11.679 --- 10.0.0.1 ping statistics --- 00:16:11.679 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:11.679 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:16:11.679 21:21:26 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:11.679 21:21:26 -- nvmf/common.sh@411 -- # return 0 00:16:11.679 21:21:26 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:16:11.679 21:21:26 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:11.679 21:21:26 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:16:11.679 21:21:26 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:16:11.679 21:21:26 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:11.679 21:21:26 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:16:11.679 21:21:26 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:16:11.679 21:21:26 -- target/host_management.sh@107 -- # run_test nvmf_host_management nvmf_host_management 00:16:11.679 21:21:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:11.679 21:21:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:11.679 21:21:26 -- common/autotest_common.sh@10 -- # set +x 00:16:11.679 ************************************ 00:16:11.679 START TEST nvmf_host_management 00:16:11.679 ************************************ 00:16:11.679 21:21:26 -- common/autotest_common.sh@1111 -- # nvmf_host_management 00:16:11.679 21:21:26 -- target/host_management.sh@69 -- # starttarget 00:16:11.679 21:21:26 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:16:11.679 21:21:26 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:16:11.679 21:21:26 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:11.679 21:21:26 -- common/autotest_common.sh@10 -- # set +x 00:16:11.679 21:21:26 -- nvmf/common.sh@470 -- # nvmfpid=1177402 00:16:11.679 21:21:26 -- nvmf/common.sh@471 -- # waitforlisten 1177402 00:16:11.679 21:21:26 -- common/autotest_common.sh@817 -- # '[' -z 1177402 ']' 00:16:11.680 21:21:26 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:16:11.680 21:21:26 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:11.680 21:21:26 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:11.680 21:21:26 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:11.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:11.680 21:21:26 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:11.680 21:21:26 -- common/autotest_common.sh@10 -- # set +x 00:16:11.680 [2024-04-24 21:21:26.214022] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization... 
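The nvmfappstart call below amounts to launching the target inside the test namespace and polling its RPC socket; the retry loop here is a condensed stand-in for waitforlisten from autotest_common.sh (which, per the trace, also caps at 100 retries), with rpc_get_methods used only as a cheap liveness probe:

ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!
rpc=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py
# Wait until the app answers on /var/tmp/spdk.sock before issuing real RPCs.
for ((i = 0; i < 100; i++)); do
    $rpc -t 1 -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
    sleep 0.5
done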
00:16:11.680 [2024-04-24 21:21:26.214122] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:11.680 EAL: No free 2048 kB hugepages reported on node 1 00:16:11.680 [2024-04-24 21:21:26.335650] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:11.680 [2024-04-24 21:21:26.433955] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:11.680 [2024-04-24 21:21:26.433991] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:11.680 [2024-04-24 21:21:26.434003] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:11.680 [2024-04-24 21:21:26.434012] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:11.680 [2024-04-24 21:21:26.434020] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:11.680 [2024-04-24 21:21:26.434167] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:11.680 [2024-04-24 21:21:26.434318] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:11.680 [2024-04-24 21:21:26.434475] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:11.680 [2024-04-24 21:21:26.434503] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:12.252 21:21:26 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:12.252 21:21:26 -- common/autotest_common.sh@850 -- # return 0 00:16:12.252 21:21:26 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:16:12.252 21:21:26 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:12.252 21:21:26 -- common/autotest_common.sh@10 -- # set +x 00:16:12.252 21:21:26 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:12.252 21:21:26 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:12.252 21:21:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:12.252 21:21:26 -- common/autotest_common.sh@10 -- # set +x 00:16:12.252 [2024-04-24 21:21:26.951154] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:12.252 21:21:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:12.252 21:21:26 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:16:12.252 21:21:26 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:12.252 21:21:26 -- common/autotest_common.sh@10 -- # set +x 00:16:12.252 21:21:26 -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:12.252 21:21:26 -- target/host_management.sh@23 -- # cat 00:16:12.252 21:21:26 -- target/host_management.sh@30 -- # rpc_cmd 00:16:12.252 21:21:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:12.252 21:21:26 -- common/autotest_common.sh@10 -- # set +x 00:16:12.252 Malloc0 00:16:12.252 [2024-04-24 21:21:27.027385] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:12.252 21:21:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:12.252 21:21:27 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:16:12.252 21:21:27 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:12.252 21:21:27 -- common/autotest_common.sh@10 -- # set +x 00:16:12.252 21:21:27 
-- target/host_management.sh@73 -- # perfpid=1177728 00:16:12.252 21:21:27 -- target/host_management.sh@74 -- # waitforlisten 1177728 /var/tmp/bdevperf.sock 00:16:12.252 21:21:27 -- common/autotest_common.sh@817 -- # '[' -z 1177728 ']' 00:16:12.252 21:21:27 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:12.252 21:21:27 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:12.252 21:21:27 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:12.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:12.252 21:21:27 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:12.252 21:21:27 -- common/autotest_common.sh@10 -- # set +x 00:16:12.252 21:21:27 -- target/host_management.sh@72 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:16:12.252 21:21:27 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:16:12.252 21:21:27 -- nvmf/common.sh@521 -- # config=() 00:16:12.252 21:21:27 -- nvmf/common.sh@521 -- # local subsystem config 00:16:12.252 21:21:27 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:12.252 21:21:27 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:12.252 { 00:16:12.252 "params": { 00:16:12.252 "name": "Nvme$subsystem", 00:16:12.252 "trtype": "$TEST_TRANSPORT", 00:16:12.252 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:12.252 "adrfam": "ipv4", 00:16:12.252 "trsvcid": "$NVMF_PORT", 00:16:12.252 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:12.252 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:12.252 "hdgst": ${hdgst:-false}, 00:16:12.252 "ddgst": ${ddgst:-false} 00:16:12.252 }, 00:16:12.252 "method": "bdev_nvme_attach_controller" 00:16:12.252 } 00:16:12.252 EOF 00:16:12.252 )") 00:16:12.252 21:21:27 -- nvmf/common.sh@543 -- # cat 00:16:12.252 21:21:27 -- nvmf/common.sh@545 -- # jq . 00:16:12.252 21:21:27 -- nvmf/common.sh@546 -- # IFS=, 00:16:12.252 21:21:27 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:16:12.252 "params": { 00:16:12.252 "name": "Nvme0", 00:16:12.252 "trtype": "tcp", 00:16:12.252 "traddr": "10.0.0.2", 00:16:12.252 "adrfam": "ipv4", 00:16:12.252 "trsvcid": "4420", 00:16:12.252 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:12.252 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:16:12.252 "hdgst": false, 00:16:12.252 "ddgst": false 00:16:12.252 }, 00:16:12.252 "method": "bdev_nvme_attach_controller" 00:16:12.252 }' 00:16:12.252 [2024-04-24 21:21:27.162909] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization... 00:16:12.252 [2024-04-24 21:21:27.163058] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1177728 ] 00:16:12.513 EAL: No free 2048 kB hugepages reported on node 1 00:16:12.513 [2024-04-24 21:21:27.295338] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:12.513 [2024-04-24 21:21:27.385684] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:12.774 Running I/O for 10 seconds... 
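[Annotation] The trace above shows how gen_nvmf_target_json (nvmf/common.sh@521-547) builds bdevperf's configuration: a heredoc template is expanded for subsystem 0, the resolved "params"/"method" object is pretty-printed through jq, and host_management.sh@72 hands it to bdevperf as --json /dev/fd/63 via process substitution. A minimal re-creation of that invocation follows; the "params" values are verbatim from the printf output above, but the outer "subsystems"/"config" wrapper is an assumption about the shape bdevperf --json expects, since the wrapper itself is not echoed in this trace:

bdevperf=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf
# run in the background so the test can drive it over /var/tmp/bdevperf.sock
"$bdevperf" -r /var/tmp/bdevperf.sock -q 64 -o 65536 -w verify -t 10 --json <(cat <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
) &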
00:16:13.035 21:21:27 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:13.035 21:21:27 -- common/autotest_common.sh@850 -- # return 0 00:16:13.035 21:21:27 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:16:13.035 21:21:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:13.035 21:21:27 -- common/autotest_common.sh@10 -- # set +x 00:16:13.035 21:21:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:13.035 21:21:27 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:13.035 21:21:27 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:16:13.035 21:21:27 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:16:13.035 21:21:27 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:16:13.035 21:21:27 -- target/host_management.sh@52 -- # local ret=1 00:16:13.035 21:21:27 -- target/host_management.sh@53 -- # local i 00:16:13.035 21:21:27 -- target/host_management.sh@54 -- # (( i = 10 )) 00:16:13.035 21:21:27 -- target/host_management.sh@54 -- # (( i != 0 )) 00:16:13.035 21:21:27 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:16:13.035 21:21:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:13.035 21:21:27 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:16:13.035 21:21:27 -- common/autotest_common.sh@10 -- # set +x 00:16:13.035 21:21:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:13.035 21:21:27 -- target/host_management.sh@55 -- # read_io_count=515 00:16:13.035 21:21:27 -- target/host_management.sh@58 -- # '[' 515 -ge 100 ']' 00:16:13.035 21:21:27 -- target/host_management.sh@59 -- # ret=0 00:16:13.035 21:21:27 -- target/host_management.sh@60 -- # break 00:16:13.035 21:21:27 -- target/host_management.sh@64 -- # return 0 00:16:13.035 21:21:27 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:16:13.035 21:21:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:13.035 21:21:27 -- common/autotest_common.sh@10 -- # set +x 00:16:13.035 [2024-04-24 21:21:27.968828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:76160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.035 [2024-04-24 21:21:27.968887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.035 [2024-04-24 21:21:27.968913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:76288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.035 [2024-04-24 21:21:27.968922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.035 [2024-04-24 21:21:27.968933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:76416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.035 [2024-04-24 21:21:27.968942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.035 [2024-04-24 21:21:27.968953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:76544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.035 [2024-04-24 21:21:27.968961] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.035 [2024-04-24 21:21:27.968971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:76672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.035 [2024-04-24 21:21:27.968979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.035 [2024-04-24 21:21:27.968988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:76800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.035 [2024-04-24 21:21:27.969002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.035 [2024-04-24 21:21:27.969012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:76928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.035 [2024-04-24 21:21:27.969019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.035 [2024-04-24 21:21:27.969029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:77056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.035 [2024-04-24 21:21:27.969036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.035 [2024-04-24 21:21:27.969046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:77184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.035 [2024-04-24 21:21:27.969054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.035 [2024-04-24 21:21:27.969064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:77312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.035 [2024-04-24 21:21:27.969071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.035 [2024-04-24 21:21:27.969081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:77440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.035 [2024-04-24 21:21:27.969088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.035 [2024-04-24 21:21:27.969098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:77568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.035 [2024-04-24 21:21:27.969106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.035 [2024-04-24 21:21:27.969116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:77696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.035 [2024-04-24 21:21:27.969136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.035 [2024-04-24 21:21:27.969146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:77824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.035 [2024-04-24 21:21:27.969154] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.035 [2024-04-24 21:21:27.969164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:77952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.035 [2024-04-24 21:21:27.969171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.035 [2024-04-24 21:21:27.969181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:78080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.035 [2024-04-24 21:21:27.969189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.035 [2024-04-24 21:21:27.969198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:78208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.035 [2024-04-24 21:21:27.969206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.035 [2024-04-24 21:21:27.969215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:78336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.035 [2024-04-24 21:21:27.969223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.035 [2024-04-24 21:21:27.969235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:78464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.035 [2024-04-24 21:21:27.969243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.035 [2024-04-24 21:21:27.969253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:78592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.035 [2024-04-24 21:21:27.969260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.035 [2024-04-24 21:21:27.969274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:78720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.035 [2024-04-24 21:21:27.969282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.035 [2024-04-24 21:21:27.969292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:78848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.035 [2024-04-24 21:21:27.969299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.035 [2024-04-24 21:21:27.969309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:78976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.035 [2024-04-24 21:21:27.969316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.035 [2024-04-24 21:21:27.969326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:79104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.035 [2024-04-24 21:21:27.969333] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.035 [2024-04-24 21:21:27.969343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:79232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.035 [2024-04-24 21:21:27.969351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.035 [2024-04-24 21:21:27.969361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:79360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.035 [2024-04-24 21:21:27.969369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.036 [2024-04-24 21:21:27.969380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:79488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.036 [2024-04-24 21:21:27.969388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.036 [2024-04-24 21:21:27.969397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:79616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.036 [2024-04-24 21:21:27.969405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.036 [2024-04-24 21:21:27.969414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:79744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.036 [2024-04-24 21:21:27.969422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.036 [2024-04-24 21:21:27.969432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:79872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.036 [2024-04-24 21:21:27.969439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.036 [2024-04-24 21:21:27.969449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:80000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.036 [2024-04-24 21:21:27.969458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.036 [2024-04-24 21:21:27.969467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:80128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.036 [2024-04-24 21:21:27.969475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.036 [2024-04-24 21:21:27.969485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:80256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.036 [2024-04-24 21:21:27.969492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.036 [2024-04-24 21:21:27.969502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:80384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.036 [2024-04-24 21:21:27.969510] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.036 [2024-04-24 21:21:27.969519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:80512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.036 [2024-04-24 21:21:27.969527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.036 [2024-04-24 21:21:27.969536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:80640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.036 [2024-04-24 21:21:27.969543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.036 [2024-04-24 21:21:27.969554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:80768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.036 [2024-04-24 21:21:27.969562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.036 [2024-04-24 21:21:27.969572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:80896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.036 [2024-04-24 21:21:27.969579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.036 [2024-04-24 21:21:27.969589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:81024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.036 [2024-04-24 21:21:27.969596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.036 [2024-04-24 21:21:27.969605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:81152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.036 [2024-04-24 21:21:27.969613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.036 [2024-04-24 21:21:27.969623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:81280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.036 [2024-04-24 21:21:27.969630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.036 [2024-04-24 21:21:27.969649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:81408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.036 [2024-04-24 21:21:27.969657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.036 [2024-04-24 21:21:27.969667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:81536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.036 [2024-04-24 21:21:27.969675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.036 [2024-04-24 21:21:27.969685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:81664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.036 [2024-04-24 21:21:27.969693] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.036 [2024-04-24 21:21:27.969703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:81792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.036 [2024-04-24 21:21:27.969711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.036 [2024-04-24 21:21:27.969720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:73728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.036 [2024-04-24 21:21:27.969728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.036 [2024-04-24 21:21:27.969738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:73856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.036 [2024-04-24 21:21:27.969746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.036 [2024-04-24 21:21:27.969755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:73984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.036 [2024-04-24 21:21:27.969763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.036 [2024-04-24 21:21:27.969772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:74112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.036 [2024-04-24 21:21:27.969780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.036 [2024-04-24 21:21:27.969790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:74240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.036 [2024-04-24 21:21:27.969797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.036 [2024-04-24 21:21:27.969807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:74368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.036 [2024-04-24 21:21:27.969815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.036 [2024-04-24 21:21:27.969825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:74496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.036 [2024-04-24 21:21:27.969832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.036 [2024-04-24 21:21:27.969842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:74624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.036 [2024-04-24 21:21:27.969849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.036 [2024-04-24 21:21:27.969858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:74752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.036 [2024-04-24 21:21:27.969866] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.036 [2024-04-24 21:21:27.969876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:74880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.036 [2024-04-24 21:21:27.969883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.036 [2024-04-24 21:21:27.969893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:75008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.036 [2024-04-24 21:21:27.969902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.036 [2024-04-24 21:21:27.969912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:75136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.036 [2024-04-24 21:21:27.969920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.036 [2024-04-24 21:21:27.969929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:75264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.036 [2024-04-24 21:21:27.969937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.036 [2024-04-24 21:21:27.969946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:75392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.036 [2024-04-24 21:21:27.969954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.036 [2024-04-24 21:21:27.969963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:75520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.036 [2024-04-24 21:21:27.969971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.036 [2024-04-24 21:21:27.969980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:75648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.036 [2024-04-24 21:21:27.969987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.036 [2024-04-24 21:21:27.969997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:75776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.036 [2024-04-24 21:21:27.970004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.036 [2024-04-24 21:21:27.970014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:75904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.036 [2024-04-24 21:21:27.970022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.036 [2024-04-24 21:21:27.970031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:76032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.036 [2024-04-24 21:21:27.970039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:13.036 [2024-04-24 21:21:27.970181] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x614000007e40 was disconnected and freed. reset controller.
00:16:13.036 [2024-04-24 21:21:27.971092] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:16:13.036 task offset: 76160 on job bdev=Nvme0n1 fails
00:16:13.037
00:16:13.037 Latency(us)
00:16:13.037 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:13.037 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:13.037 Job: Nvme0n1 ended in about 0.36 seconds with error
00:16:13.037 Verification LBA range: start 0x0 length 0x400
00:16:13.037 Nvme0n1 : 0.36 1608.47 100.53 178.72 0.00 34913.74 1845.36 32147.13
00:16:13.037 ===================================================================================================================
00:16:13.037 Total : 1608.47 100.53 178.72 0.00 34913.74 1845.36 32147.13
00:16:13.037 21:21:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:16:13.037 [2024-04-24 21:21:27.973700] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:16:13.037 21:21:27 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:16:13.037 [2024-04-24 21:21:27.973737] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor
00:16:13.037 21:21:27 -- common/autotest_common.sh@549 -- # xtrace_disable
00:16:13.037 21:21:27 -- common/autotest_common.sh@10 -- # set +x
00:16:13.037 21:21:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:16:13.037 21:21:27 -- target/host_management.sh@87 -- # sleep 1
[2024-04-24 21:21:28.023098] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
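[Annotation] In order, what the section above records: host_management.sh@80 polled the running bdevperf with waitforio until the Nvme0n1 bdev had completed at least 100 reads (bdev_get_iostat reported 515), then @84 removed host0 from cnode0 while the full queue depth of 64 commands (45 writes, 19 reads) was still in flight. The target tore the queue pair down, every outstanding command completed as ABORTED - SQ DELETION (the long dump above), the verify job failed, and once @85 re-added the host the automatic controller reset reconnected ("Resetting controller successful"). A condensed sketch of the waitforio helper as it appears in this trace (host_management.sh@45-64); the retry pacing is an assumption, since the first poll already succeeded in this run:

waitforio() {
    local rpc_sock=$1 bdev=$2 i ops
    for ((i = 10; i != 0; i--)); do
        # bdev_get_iostat returns per-bdev counters; jq pulls num_read_ops,
        # exactly as the trace shows for /var/tmp/bdevperf.sock and Nvme0n1
        ops=$(rpc_cmd -s "$rpc_sock" bdev_get_iostat -b "$bdev" |
              jq -r '.bdevs[0].num_read_ops')
        [ "$ops" -ge 100 ] && return 0
        sleep 0.25  # assumed delay between polls (not visible in the trace)
    done
    return 1
}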
00:16:14.237 21:21:28 -- target/host_management.sh@91 -- # kill -9 1177728 00:16:14.237 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1177728) - No such process 00:16:14.237 21:21:28 -- target/host_management.sh@91 -- # true 00:16:14.237 21:21:28 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:16:14.237 21:21:28 -- target/host_management.sh@100 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:16:14.237 21:21:28 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:16:14.237 21:21:28 -- nvmf/common.sh@521 -- # config=() 00:16:14.237 21:21:28 -- nvmf/common.sh@521 -- # local subsystem config 00:16:14.237 21:21:28 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:14.237 21:21:28 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:14.237 { 00:16:14.237 "params": { 00:16:14.237 "name": "Nvme$subsystem", 00:16:14.237 "trtype": "$TEST_TRANSPORT", 00:16:14.237 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:14.237 "adrfam": "ipv4", 00:16:14.237 "trsvcid": "$NVMF_PORT", 00:16:14.237 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:14.237 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:14.237 "hdgst": ${hdgst:-false}, 00:16:14.237 "ddgst": ${ddgst:-false} 00:16:14.237 }, 00:16:14.237 "method": "bdev_nvme_attach_controller" 00:16:14.237 } 00:16:14.237 EOF 00:16:14.237 )") 00:16:14.237 21:21:28 -- nvmf/common.sh@543 -- # cat 00:16:14.238 21:21:28 -- nvmf/common.sh@545 -- # jq . 00:16:14.238 21:21:28 -- nvmf/common.sh@546 -- # IFS=, 00:16:14.238 21:21:28 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:16:14.238 "params": { 00:16:14.238 "name": "Nvme0", 00:16:14.238 "trtype": "tcp", 00:16:14.238 "traddr": "10.0.0.2", 00:16:14.238 "adrfam": "ipv4", 00:16:14.238 "trsvcid": "4420", 00:16:14.238 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:14.238 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:16:14.238 "hdgst": false, 00:16:14.238 "ddgst": false 00:16:14.238 }, 00:16:14.238 "method": "bdev_nvme_attach_controller" 00:16:14.238 }' 00:16:14.238 [2024-04-24 21:21:29.073837] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization... 00:16:14.238 [2024-04-24 21:21:29.073989] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1178047 ] 00:16:14.238 EAL: No free 2048 kB hugepages reported on node 1 00:16:14.498 [2024-04-24 21:21:29.205874] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:14.498 [2024-04-24 21:21:29.298431] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:14.758 Running I/O for 1 seconds... 
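[Annotation] The kill -9 on pid 1177728 at the top of this section reports "No such process" because the first bdevperf already exited on its own when its verify job failed (the "spdk_app_stop'd on non-zero" warning above); the trailing `true` keeps the trap from failing the test, and @97 removes the stale /var/tmp/spdk_cpu_lock_* files. The second pass (host_management.sh@100) then re-runs bdevperf with no RPC socket for a single second, purely to prove the target still serves I/O after the host was removed and re-added; its results follow. The invocation is equivalent to:

# same JSON config generator as before, one-second standalone run
/var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf \
    --json <(gen_nvmf_target_json 0) -q 64 -o 65536 -w verify -t 1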
00:16:16.153
00:16:16.153 Latency(us)
00:16:16.153 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:16.153 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:16.153 Verification LBA range: start 0x0 length 0x400
00:16:16.153 Nvme0n1 : 1.01 2087.34 130.46 0.00 0.00 30222.87 1465.94 29387.72
00:16:16.153 ===================================================================================================================
00:16:16.153 Total : 2087.34 130.46 0.00 0.00 30222.87 1465.94 29387.72
00:16:16.153 21:21:31 -- target/host_management.sh@102 -- # stoptarget
00:16:16.153 21:21:31 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:16:16.153 21:21:31 -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:16:16.153 21:21:31 -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:16:16.153 21:21:31 -- target/host_management.sh@40 -- # nvmftestfini
00:16:16.153 21:21:31 -- nvmf/common.sh@477 -- # nvmfcleanup
00:16:16.153 21:21:31 -- nvmf/common.sh@117 -- # sync
00:16:16.153 21:21:31 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:16:16.153 21:21:31 -- nvmf/common.sh@120 -- # set +e
00:16:16.153 21:21:31 -- nvmf/common.sh@121 -- # for i in {1..20}
00:16:16.153 21:21:31 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:16:16.413 rmmod nvme_tcp
00:16:16.413 rmmod nvme_fabrics
00:16:16.413 rmmod nvme_keyring
00:16:16.413 21:21:31 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:16:16.413 21:21:31 -- nvmf/common.sh@124 -- # set -e
00:16:16.413 21:21:31 -- nvmf/common.sh@125 -- # return 0
00:16:16.413 21:21:31 -- nvmf/common.sh@478 -- # '[' -n 1177402 ']'
00:16:16.413 21:21:31 -- nvmf/common.sh@479 -- # killprocess 1177402
00:16:16.413 21:21:31 -- common/autotest_common.sh@936 -- # '[' -z 1177402 ']'
00:16:16.413 21:21:31 -- common/autotest_common.sh@940 -- # kill -0 1177402
00:16:16.413 21:21:31 -- common/autotest_common.sh@941 -- # uname
00:16:16.413 21:21:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:16:16.413 21:21:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1177402
00:16:16.413 21:21:31 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:16:16.413 21:21:31 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:16:16.413 21:21:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1177402'
00:16:16.413 killing process with pid 1177402
00:16:16.413 21:21:31 -- common/autotest_common.sh@955 -- # kill 1177402
00:16:16.413 21:21:31 -- common/autotest_common.sh@960 -- # wait 1177402
00:16:16.982 [2024-04-24 21:21:31.683594] app.c: 630:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2
00:16:16.982 21:21:31 -- nvmf/common.sh@481 -- # '[' '' == iso ']'
00:16:16.982 21:21:31 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]]
00:16:16.982 21:21:31 -- nvmf/common.sh@485 -- # nvmf_tcp_fini
00:16:16.982 21:21:31 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:16:16.982 21:21:31 -- nvmf/common.sh@278 -- # remove_spdk_ns
00:16:16.982 21:21:31 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:16:16.982 21:21:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:16:16.982 21:21:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:16:18.888 21:21:33 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
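[Annotation] nvmftestfini above unloads the host-side modules (the three rmmod lines), kills the nvmf target, and removes the network namespace. The unclaim_cpu_cores ERROR is the target failing to unlink a /var/tmp/spdk_cpu_lock_* file that host_management.sh@97 had already deleted (errno 2, ENOENT), which is harmless here. A condensed sketch of the killprocess helper as exercised in this trace (autotest_common.sh@936-960); the real function carries more platform branches than shown:

killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1
    if [ "$(uname)" = Linux ]; then
        # confirm the pid still belongs to an SPDK reactor before killing it
        process_name=$(ps --no-headers -o comm= "$pid")
    fi
    if [ "$process_name" = sudo ]; then
        :  # assumption: a sudo wrapper would need its child killed instead
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"   # reap it so the lock files are released before the next test
}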
00:16:18.888
00:16:18.888 real 0m7.653s
00:16:18.888 user 0m23.734s
00:16:18.888 sys 0m1.193s
00:16:18.888 21:21:33 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:16:18.888 21:21:33 -- common/autotest_common.sh@10 -- # set +x
00:16:18.888 ************************************
00:16:18.888 END TEST nvmf_host_management
00:16:18.888 ************************************
00:16:18.888 21:21:33 -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT
00:16:18.888
00:16:18.888 real 0m12.847s
00:16:18.888 user 0m25.167s
00:16:18.888 sys 0m4.918s
00:16:18.888 21:21:33 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:16:18.888 21:21:33 -- common/autotest_common.sh@10 -- # set +x
00:16:18.888 ************************************
00:16:18.888 END TEST nvmf_host_management
00:16:18.888 ************************************
00:16:19.148 21:21:33 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp
00:16:19.148 21:21:33 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:16:19.148 21:21:33 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:16:19.148 21:21:33 -- common/autotest_common.sh@10 -- # set +x
00:16:19.148 ************************************
00:16:19.148 START TEST nvmf_lvol
00:16:19.148 ************************************
00:16:19.148 21:21:33 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp
00:16:19.148 * Looking for test storage...
00:16:19.148 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target
00:16:19.148 21:21:34 -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh
00:16:19.148 21:21:34 -- nvmf/common.sh@7 -- # uname -s
00:16:19.148 21:21:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:16:19.148 21:21:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:16:19.148 21:21:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:16:19.148 21:21:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:16:19.148 21:21:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:16:19.148 21:21:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:16:19.148 21:21:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:16:19.148 21:21:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:16:19.148 21:21:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:16:19.148 21:21:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:16:19.148 21:21:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b7babf-2e5c-ee11-906e-a4bf01970bf2
00:16:19.148 21:21:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=80b7babf-2e5c-ee11-906e-a4bf01970bf2
00:16:19.148 21:21:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:16:19.148 21:21:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:16:19.148 21:21:34 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:16:19.148 21:21:34 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:16:19.148 21:21:34 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh
00:16:19.148 21:21:34 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]]
00:16:19.148 21:21:34 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:16:19.148 21:21:34 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:16:19.149 21:21:34 -- paths/export.sh@2 -- #
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:19.149 21:21:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:19.149 21:21:34 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:19.149 21:21:34 -- paths/export.sh@5 -- # export PATH 00:16:19.149 21:21:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:19.149 21:21:34 -- nvmf/common.sh@47 -- # : 0 00:16:19.149 21:21:34 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:19.149 21:21:34 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:19.149 21:21:34 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:19.149 21:21:34 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:19.149 21:21:34 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:19.149 21:21:34 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:19.149 21:21:34 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:19.149 21:21:34 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:19.149 21:21:34 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:19.149 21:21:34 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:19.149 21:21:34 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:16:19.149 21:21:34 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:16:19.149 21:21:34 -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:16:19.149 21:21:34 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:16:19.149 21:21:34 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:16:19.149 21:21:34 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:16:19.149 21:21:34 -- nvmf/common.sh@437 -- # prepare_net_devs 00:16:19.149 21:21:34 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:19.149 21:21:34 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:19.149 21:21:34 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:19.149 21:21:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:19.149 21:21:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:19.149 21:21:34 -- nvmf/common.sh@403 -- # [[ phy-fallback != virt ]] 00:16:19.149 21:21:34 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:16:19.149 21:21:34 -- nvmf/common.sh@285 -- # xtrace_disable 00:16:19.149 21:21:34 -- common/autotest_common.sh@10 -- # set +x 00:16:25.725 21:21:39 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:25.725 21:21:39 -- nvmf/common.sh@291 -- # pci_devs=() 00:16:25.725 21:21:39 -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:25.725 21:21:39 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:25.725 21:21:39 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:25.725 21:21:39 -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:25.725 21:21:39 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:25.725 21:21:39 -- nvmf/common.sh@295 -- # net_devs=() 00:16:25.725 21:21:39 -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:25.725 21:21:39 -- nvmf/common.sh@296 -- # e810=() 00:16:25.725 21:21:39 -- nvmf/common.sh@296 -- # local -ga e810 00:16:25.725 21:21:39 -- nvmf/common.sh@297 -- # x722=() 00:16:25.725 21:21:39 -- nvmf/common.sh@297 -- # local -ga x722 00:16:25.725 21:21:39 -- nvmf/common.sh@298 -- # mlx=() 00:16:25.725 21:21:39 -- nvmf/common.sh@298 -- # local -ga mlx 00:16:25.725 21:21:39 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:25.725 21:21:39 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:25.725 21:21:39 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:25.725 21:21:39 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:25.725 21:21:39 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:25.725 21:21:39 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:25.725 21:21:39 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:25.725 21:21:39 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:25.725 21:21:39 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:25.725 21:21:39 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:25.725 21:21:39 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:25.725 21:21:39 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:25.725 21:21:39 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:25.725 21:21:39 -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:16:25.725 21:21:39 -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:16:25.725 21:21:39 -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:16:25.725 21:21:39 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:25.725 21:21:39 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:25.725 21:21:39 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:16:25.725 Found 0000:27:00.0 (0x8086 - 0x159b) 00:16:25.725 21:21:39 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:25.725 21:21:39 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:25.725 21:21:39 -- nvmf/common.sh@350 
-- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:25.725 21:21:39 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:25.726 21:21:39 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:25.726 21:21:39 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:25.726 21:21:39 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:16:25.726 Found 0000:27:00.1 (0x8086 - 0x159b) 00:16:25.726 21:21:39 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:25.726 21:21:39 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:25.726 21:21:39 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:25.726 21:21:39 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:25.726 21:21:39 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:25.726 21:21:39 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:25.726 21:21:39 -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:16:25.726 21:21:39 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:25.726 21:21:39 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:25.726 21:21:39 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:25.726 21:21:39 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:25.726 21:21:39 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:16:25.726 Found net devices under 0000:27:00.0: cvl_0_0 00:16:25.726 21:21:39 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:25.726 21:21:39 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:25.726 21:21:39 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:25.726 21:21:39 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:25.726 21:21:39 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:25.726 21:21:39 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:16:25.726 Found net devices under 0000:27:00.1: cvl_0_1 00:16:25.726 21:21:39 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:25.726 21:21:39 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:16:25.726 21:21:39 -- nvmf/common.sh@403 -- # is_hw=yes 00:16:25.726 21:21:39 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:16:25.726 21:21:39 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:16:25.726 21:21:39 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:16:25.726 21:21:39 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:25.726 21:21:39 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:25.726 21:21:39 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:25.726 21:21:39 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:25.726 21:21:39 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:25.726 21:21:39 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:25.726 21:21:39 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:25.726 21:21:39 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:25.726 21:21:39 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:25.726 21:21:39 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:25.726 21:21:39 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:25.726 21:21:39 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:25.726 21:21:39 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:25.726 21:21:39 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:25.726 21:21:39 -- nvmf/common.sh@255 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:16:25.726 21:21:39 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:16:25.726 21:21:39 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:16:25.726 21:21:39 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:16:25.726 21:21:39 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:16:25.726 21:21:39 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:16:25.726 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:16:25.726 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.454 ms
00:16:25.726
00:16:25.726 --- 10.0.0.2 ping statistics ---
00:16:25.726 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:16:25.726 rtt min/avg/max/mdev = 0.454/0.454/0.454/0.000 ms
00:16:25.726 21:21:39 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:16:25.726 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:16:25.726 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.274 ms
00:16:25.726
00:16:25.726 --- 10.0.0.1 ping statistics ---
00:16:25.726 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:16:25.726 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms
00:16:25.726 21:21:39 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:16:25.726 21:21:39 -- nvmf/common.sh@411 -- # return 0
00:16:25.726 21:21:39 -- nvmf/common.sh@439 -- # '[' '' == iso ']'
00:16:25.726 21:21:39 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:16:25.726 21:21:39 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]]
00:16:25.726 21:21:39 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]]
00:16:25.726 21:21:39 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:16:25.726 21:21:39 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']'
00:16:25.726 21:21:39 -- nvmf/common.sh@463 -- # modprobe nvme-tcp
00:16:25.726 21:21:39 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7
00:16:25.726 21:21:39 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt
00:16:25.726 21:21:39 -- common/autotest_common.sh@710 -- # xtrace_disable
00:16:25.726 21:21:39 -- common/autotest_common.sh@10 -- # set +x
00:16:25.726 21:21:39 -- nvmf/common.sh@470 -- # nvmfpid=1182562
00:16:25.726 21:21:39 -- nvmf/common.sh@471 -- # waitforlisten 1182562
00:16:25.726 21:21:39 -- common/autotest_common.sh@817 -- # '[' -z 1182562 ']'
00:16:25.726 21:21:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:16:25.726 21:21:39 -- common/autotest_common.sh@822 -- # local max_retries=100
00:16:25.726 21:21:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:16:25.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:16:25.726 21:21:39 -- common/autotest_common.sh@826 -- # xtrace_disable
00:16:25.726 21:21:39 -- common/autotest_common.sh@10 -- # set +x
00:16:25.726 21:21:39 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7
[2024-04-24 21:21:40.069898] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization...
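[Annotation] The commands above build the loopback topology every TCP test in this run relies on: the target-side port cvl_0_0 (10.0.0.2) is moved into the cvl_0_0_ns_spdk namespace, the initiator-side port cvl_0_1 (10.0.0.1) stays in the root namespace, and the two-way pings prove the path before the target starts. Collected in one place, all taken from this trace and the pci scan above it:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target NIC into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# every target invocation is then prefixed with the namespace, e.g.:
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7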
00:16:25.726 [2024-04-24 21:21:40.070007] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:25.726 EAL: No free 2048 kB hugepages reported on node 1 00:16:25.726 [2024-04-24 21:21:40.187312] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:25.726 [2024-04-24 21:21:40.284574] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:25.726 [2024-04-24 21:21:40.284613] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:25.726 [2024-04-24 21:21:40.284622] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:25.726 [2024-04-24 21:21:40.284631] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:25.726 [2024-04-24 21:21:40.284640] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:25.726 [2024-04-24 21:21:40.284695] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:25.726 [2024-04-24 21:21:40.284807] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:25.726 [2024-04-24 21:21:40.284812] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:25.987 21:21:40 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:25.987 21:21:40 -- common/autotest_common.sh@850 -- # return 0 00:16:25.987 21:21:40 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:16:25.987 21:21:40 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:25.987 21:21:40 -- common/autotest_common.sh@10 -- # set +x 00:16:25.987 21:21:40 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:25.987 21:21:40 -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:25.987 [2024-04-24 21:21:40.942759] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:26.248 21:21:40 -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:26.248 21:21:41 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:16:26.248 21:21:41 -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:26.508 21:21:41 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:16:26.508 21:21:41 -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:16:26.767 21:21:41 -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:16:26.767 21:21:41 -- target/nvmf_lvol.sh@29 -- # lvs=16b69f5a-60e5-4082-8486-7a908409a803 00:16:26.767 21:21:41 -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 16b69f5a-60e5-4082-8486-7a908409a803 lvol 20 00:16:27.025 21:21:41 -- target/nvmf_lvol.sh@32 -- # lvol=6a5c6c21-dd6e-4f0a-94f4-524a359d752f 00:16:27.025 21:21:41 -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:16:27.025 21:21:41 -- target/nvmf_lvol.sh@36 -- # 
/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 6a5c6c21-dd6e-4f0a-94f4-524a359d752f 00:16:27.285 21:21:42 -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:27.285 [2024-04-24 21:21:42.179429] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:27.285 21:21:42 -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:27.545 21:21:42 -- target/nvmf_lvol.sh@42 -- # perf_pid=1183180 00:16:27.545 21:21:42 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:16:27.545 21:21:42 -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:16:27.545 EAL: No free 2048 kB hugepages reported on node 1 00:16:28.484 21:21:43 -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 6a5c6c21-dd6e-4f0a-94f4-524a359d752f MY_SNAPSHOT 00:16:28.742 21:21:43 -- target/nvmf_lvol.sh@47 -- # snapshot=88bcdb60-cec7-4124-977c-38acad55dc52 00:16:28.742 21:21:43 -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 6a5c6c21-dd6e-4f0a-94f4-524a359d752f 30 00:16:29.001 21:21:43 -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 88bcdb60-cec7-4124-977c-38acad55dc52 MY_CLONE 00:16:29.001 21:21:43 -- target/nvmf_lvol.sh@49 -- # clone=e2dc43e7-0076-455e-8320-a8c1ff7f725a 00:16:29.001 21:21:43 -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate e2dc43e7-0076-455e-8320-a8c1ff7f725a 00:16:29.572 21:21:44 -- target/nvmf_lvol.sh@53 -- # wait 1183180 00:16:39.647 Initializing NVMe Controllers 00:16:39.647 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:16:39.647 Controller IO queue size 128, less than required. 00:16:39.647 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:39.647 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:16:39.647 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:16:39.647 Initialization complete. Launching workers. 
00:16:39.647 ======================================================== 00:16:39.647 Latency(us) 00:16:39.647 Device Information : IOPS MiB/s Average min max 00:16:39.647 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 13772.90 53.80 9296.73 271.73 84507.38 00:16:39.647 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 13684.60 53.46 9356.58 2362.64 60188.94 00:16:39.647 ======================================================== 00:16:39.647 Total : 27457.49 107.26 9326.56 271.73 84507.38 00:16:39.647 00:16:39.647 21:21:52 -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:16:39.647 21:21:52 -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 6a5c6c21-dd6e-4f0a-94f4-524a359d752f 00:16:39.647 21:21:53 -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 16b69f5a-60e5-4082-8486-7a908409a803 00:16:39.647 21:21:53 -- target/nvmf_lvol.sh@60 -- # rm -f 00:16:39.647 21:21:53 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:16:39.647 21:21:53 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:16:39.647 21:21:53 -- nvmf/common.sh@477 -- # nvmfcleanup 00:16:39.647 21:21:53 -- nvmf/common.sh@117 -- # sync 00:16:39.647 21:21:53 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:39.647 21:21:53 -- nvmf/common.sh@120 -- # set +e 00:16:39.647 21:21:53 -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:39.647 21:21:53 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:39.648 rmmod nvme_tcp 00:16:39.648 rmmod nvme_fabrics 00:16:39.648 rmmod nvme_keyring 00:16:39.648 21:21:53 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:39.648 21:21:53 -- nvmf/common.sh@124 -- # set -e 00:16:39.648 21:21:53 -- nvmf/common.sh@125 -- # return 0 00:16:39.648 21:21:53 -- nvmf/common.sh@478 -- # '[' -n 1182562 ']' 00:16:39.648 21:21:53 -- nvmf/common.sh@479 -- # killprocess 1182562 00:16:39.648 21:21:53 -- common/autotest_common.sh@936 -- # '[' -z 1182562 ']' 00:16:39.648 21:21:53 -- common/autotest_common.sh@940 -- # kill -0 1182562 00:16:39.648 21:21:53 -- common/autotest_common.sh@941 -- # uname 00:16:39.648 21:21:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:39.648 21:21:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1182562 00:16:39.648 21:21:53 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:39.648 21:21:53 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:39.648 21:21:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1182562' 00:16:39.648 killing process with pid 1182562 00:16:39.648 21:21:53 -- common/autotest_common.sh@955 -- # kill 1182562 00:16:39.648 21:21:53 -- common/autotest_common.sh@960 -- # wait 1182562 00:16:39.648 21:21:53 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:16:39.648 21:21:53 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:16:39.648 21:21:53 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:16:39.648 21:21:53 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:39.648 21:21:53 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:39.648 21:21:53 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:39.648 21:21:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:39.648 21:21:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:41.563 
21:21:56 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:41.563 00:16:41.563 real 0m22.064s 00:16:41.563 user 1m3.245s 00:16:41.563 sys 0m6.738s 00:16:41.563 21:21:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:41.563 21:21:56 -- common/autotest_common.sh@10 -- # set +x 00:16:41.563 ************************************ 00:16:41.563 END TEST nvmf_lvol 00:16:41.563 ************************************ 00:16:41.563 21:21:56 -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:16:41.563 21:21:56 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:41.563 21:21:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:41.563 21:21:56 -- common/autotest_common.sh@10 -- # set +x 00:16:41.563 ************************************ 00:16:41.563 START TEST nvmf_lvs_grow 00:16:41.563 ************************************ 00:16:41.563 21:21:56 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:16:41.563 * Looking for test storage... 00:16:41.563 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:16:41.563 21:21:56 -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:16:41.563 21:21:56 -- nvmf/common.sh@7 -- # uname -s 00:16:41.563 21:21:56 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:41.563 21:21:56 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:41.563 21:21:56 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:41.563 21:21:56 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:41.563 21:21:56 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:41.563 21:21:56 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:41.563 21:21:56 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:41.563 21:21:56 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:41.564 21:21:56 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:41.564 21:21:56 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:41.564 21:21:56 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b7babf-2e5c-ee11-906e-a4bf01970bf2 00:16:41.564 21:21:56 -- nvmf/common.sh@18 -- # NVME_HOSTID=80b7babf-2e5c-ee11-906e-a4bf01970bf2 00:16:41.564 21:21:56 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:41.564 21:21:56 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:41.564 21:21:56 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:16:41.564 21:21:56 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:41.564 21:21:56 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:16:41.564 21:21:56 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:41.564 21:21:56 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:41.564 21:21:56 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:41.564 21:21:56 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:41.564 21:21:56 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:41.564 21:21:56 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:41.564 21:21:56 -- paths/export.sh@5 -- # export PATH 00:16:41.564 21:21:56 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:41.564 21:21:56 -- nvmf/common.sh@47 -- # : 0 00:16:41.564 21:21:56 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:41.564 21:21:56 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:41.564 21:21:56 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:41.564 21:21:56 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:41.564 21:21:56 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:41.564 21:21:56 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:41.564 21:21:56 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:41.564 21:21:56 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:41.564 21:21:56 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:16:41.564 21:21:56 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:41.564 21:21:56 -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:16:41.564 21:21:56 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:16:41.564 21:21:56 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:41.564 21:21:56 -- nvmf/common.sh@437 -- # prepare_net_devs 00:16:41.564 21:21:56 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:41.564 21:21:56 -- nvmf/common.sh@401 -- # 
remove_spdk_ns 00:16:41.564 21:21:56 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:41.564 21:21:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:41.564 21:21:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:41.564 21:21:56 -- nvmf/common.sh@403 -- # [[ phy-fallback != virt ]] 00:16:41.564 21:21:56 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:16:41.564 21:21:56 -- nvmf/common.sh@285 -- # xtrace_disable 00:16:41.564 21:21:56 -- common/autotest_common.sh@10 -- # set +x 00:16:46.875 21:22:01 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:46.875 21:22:01 -- nvmf/common.sh@291 -- # pci_devs=() 00:16:46.875 21:22:01 -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:46.875 21:22:01 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:46.875 21:22:01 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:46.875 21:22:01 -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:46.875 21:22:01 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:46.875 21:22:01 -- nvmf/common.sh@295 -- # net_devs=() 00:16:46.875 21:22:01 -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:46.875 21:22:01 -- nvmf/common.sh@296 -- # e810=() 00:16:46.875 21:22:01 -- nvmf/common.sh@296 -- # local -ga e810 00:16:46.875 21:22:01 -- nvmf/common.sh@297 -- # x722=() 00:16:46.875 21:22:01 -- nvmf/common.sh@297 -- # local -ga x722 00:16:46.875 21:22:01 -- nvmf/common.sh@298 -- # mlx=() 00:16:46.875 21:22:01 -- nvmf/common.sh@298 -- # local -ga mlx 00:16:46.875 21:22:01 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:46.875 21:22:01 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:46.875 21:22:01 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:46.875 21:22:01 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:46.875 21:22:01 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:46.875 21:22:01 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:46.875 21:22:01 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:46.875 21:22:01 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:46.875 21:22:01 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:46.875 21:22:01 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:46.875 21:22:01 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:46.875 21:22:01 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:46.875 21:22:01 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:46.875 21:22:01 -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:16:46.875 21:22:01 -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:16:46.875 21:22:01 -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:16:46.875 21:22:01 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:46.875 21:22:01 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:46.875 21:22:01 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:16:46.875 Found 0000:27:00.0 (0x8086 - 0x159b) 00:16:46.875 21:22:01 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:46.875 21:22:01 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:46.875 21:22:01 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:46.875 21:22:01 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:46.875 21:22:01 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:46.875 
21:22:01 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:46.875 21:22:01 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:16:46.875 Found 0000:27:00.1 (0x8086 - 0x159b) 00:16:46.875 21:22:01 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:46.875 21:22:01 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:46.875 21:22:01 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:46.875 21:22:01 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:46.875 21:22:01 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:46.875 21:22:01 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:46.875 21:22:01 -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:16:46.875 21:22:01 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:46.875 21:22:01 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:46.875 21:22:01 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:46.876 21:22:01 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:46.876 21:22:01 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:16:46.876 Found net devices under 0000:27:00.0: cvl_0_0 00:16:46.876 21:22:01 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:46.876 21:22:01 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:46.876 21:22:01 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:46.876 21:22:01 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:46.876 21:22:01 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:46.876 21:22:01 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:16:46.876 Found net devices under 0000:27:00.1: cvl_0_1 00:16:46.876 21:22:01 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:46.876 21:22:01 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:16:46.876 21:22:01 -- nvmf/common.sh@403 -- # is_hw=yes 00:16:46.876 21:22:01 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:16:46.876 21:22:01 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:16:46.876 21:22:01 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:16:46.876 21:22:01 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:46.876 21:22:01 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:46.876 21:22:01 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:46.876 21:22:01 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:46.876 21:22:01 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:46.876 21:22:01 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:46.876 21:22:01 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:46.876 21:22:01 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:46.876 21:22:01 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:46.876 21:22:01 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:46.876 21:22:01 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:46.876 21:22:01 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:46.876 21:22:01 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:47.137 21:22:01 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:47.137 21:22:01 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:47.137 21:22:01 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:47.137 21:22:01 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip 
link set cvl_0_0 up 00:16:47.137 21:22:01 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:47.137 21:22:01 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:47.137 21:22:02 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:47.137 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:47.137 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.352 ms 00:16:47.137 00:16:47.137 --- 10.0.0.2 ping statistics --- 00:16:47.137 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:47.137 rtt min/avg/max/mdev = 0.352/0.352/0.352/0.000 ms 00:16:47.137 21:22:02 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:47.137 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:47.137 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.148 ms 00:16:47.137 00:16:47.137 --- 10.0.0.1 ping statistics --- 00:16:47.137 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:47.137 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:16:47.137 21:22:02 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:47.137 21:22:02 -- nvmf/common.sh@411 -- # return 0 00:16:47.137 21:22:02 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:16:47.137 21:22:02 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:47.137 21:22:02 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:16:47.137 21:22:02 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:16:47.137 21:22:02 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:47.137 21:22:02 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:16:47.137 21:22:02 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:16:47.137 21:22:02 -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:16:47.137 21:22:02 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:16:47.137 21:22:02 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:47.137 21:22:02 -- common/autotest_common.sh@10 -- # set +x 00:16:47.137 21:22:02 -- nvmf/common.sh@470 -- # nvmfpid=1189184 00:16:47.137 21:22:02 -- nvmf/common.sh@471 -- # waitforlisten 1189184 00:16:47.137 21:22:02 -- common/autotest_common.sh@817 -- # '[' -z 1189184 ']' 00:16:47.137 21:22:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:47.137 21:22:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:47.137 21:22:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:47.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:47.137 21:22:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:47.137 21:22:02 -- common/autotest_common.sh@10 -- # set +x 00:16:47.137 21:22:02 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:47.398 [2024-04-24 21:22:02.124212] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization... 
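
The nvmfappstart helper expands to the namespace-wrapped launch traced at nvmf/common.sh@469 (ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0x1), records nvmfpid, and then waitforlisten blocks until the app answers on /var/tmp/spdk.sock; the 'Starting SPDK' banner just above and the EAL parameter line that follows are that target coming up. Roughly, as a sketch (the exact probe waitforlisten issues is an assumption here):

    sudo ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!
    # poll the RPC socket until the target responds
    until ./scripts/rpc.py -t 1 -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done

The '(( i == 0 ))' / 'return 0' pair in the autotest_common.sh trace corresponds to that retry loop passing on its first iteration.
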
00:16:47.398 [2024-04-24 21:22:02.124290] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:47.398 EAL: No free 2048 kB hugepages reported on node 1 00:16:47.398 [2024-04-24 21:22:02.221275] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:47.398 [2024-04-24 21:22:02.315705] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:47.398 [2024-04-24 21:22:02.315743] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:47.398 [2024-04-24 21:22:02.315755] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:47.398 [2024-04-24 21:22:02.315765] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:47.398 [2024-04-24 21:22:02.315772] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:47.398 [2024-04-24 21:22:02.315798] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:47.967 21:22:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:47.968 21:22:02 -- common/autotest_common.sh@850 -- # return 0 00:16:47.968 21:22:02 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:16:47.968 21:22:02 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:47.968 21:22:02 -- common/autotest_common.sh@10 -- # set +x 00:16:47.968 21:22:02 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:47.968 21:22:02 -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:48.228 [2024-04-24 21:22:03.010472] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:48.228 21:22:03 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:16:48.228 21:22:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:48.228 21:22:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:48.228 21:22:03 -- common/autotest_common.sh@10 -- # set +x 00:16:48.228 ************************************ 00:16:48.228 START TEST lvs_grow_clean 00:16:48.228 ************************************ 00:16:48.228 21:22:03 -- common/autotest_common.sh@1111 -- # lvs_grow 00:16:48.228 21:22:03 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:16:48.228 21:22:03 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:16:48.228 21:22:03 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:16:48.228 21:22:03 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:16:48.228 21:22:03 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:16:48.228 21:22:03 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:16:48.228 21:22:03 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:48.228 21:22:03 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:48.228 21:22:03 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:48.489 21:22:03 -- target/nvmf_lvs_grow.sh@25 -- # 
aio_bdev=aio_bdev 00:16:48.489 21:22:03 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:16:48.749 21:22:03 -- target/nvmf_lvs_grow.sh@28 -- # lvs=0c60fcdd-e039-4507-8f52-501a0a0d47e3 00:16:48.749 21:22:03 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0c60fcdd-e039-4507-8f52-501a0a0d47e3 00:16:48.749 21:22:03 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:16:48.749 21:22:03 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:16:48.749 21:22:03 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:16:48.749 21:22:03 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 0c60fcdd-e039-4507-8f52-501a0a0d47e3 lvol 150 00:16:49.010 21:22:03 -- target/nvmf_lvs_grow.sh@33 -- # lvol=4e5d36f1-8586-4f0e-9512-e1e1aeb0777b 00:16:49.010 21:22:03 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:49.010 21:22:03 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:16:49.010 [2024-04-24 21:22:03.866131] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:16:49.010 [2024-04-24 21:22:03.866203] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:16:49.010 true 00:16:49.010 21:22:03 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0c60fcdd-e039-4507-8f52-501a0a0d47e3 00:16:49.010 21:22:03 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:16:49.270 21:22:04 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:16:49.270 21:22:04 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:16:49.270 21:22:04 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 4e5d36f1-8586-4f0e-9512-e1e1aeb0777b 00:16:49.531 21:22:04 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:49.531 [2024-04-24 21:22:04.366561] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:49.531 21:22:04 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:49.792 21:22:04 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1189828 00:16:49.792 21:22:04 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:49.792 21:22:04 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1189828 /var/tmp/bdevperf.sock 00:16:49.792 21:22:04 -- common/autotest_common.sh@817 -- # '[' -z 1189828 ']' 00:16:49.792 21:22:04 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 
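
The lvs_grow numbers line up arithmetically with the commands above. The backing file starts at 200 MiB with a 4 MiB cluster size, so both the rescan notice and the data_clusters==49 assertion follow (the one-cluster metadata overhead is inferred from this run's 49/99 readings, not taken from a spec; rpc.py abbreviates the full scripts/rpc.py path in the trace):

    # 200 MiB / 4096 B blocks    = 51200 blocks  (old count in the rescan notice)
    # 400 MiB / 4096 B blocks    = 102400 blocks (new count after bdev_aio_rescan)
    # 200 MiB / 4 MiB cluster-sz = 50 clusters, minus 1 for lvstore metadata -> 49 data clusters
    truncate -s 400M test/nvmf/target/aio_bdev   # grow the file under the aio bdev
    rpc.py bdev_aio_rescan aio_bdev              # lvstore sees the larger base bdev
    rpc.py bdev_lvol_grow_lvstore -u <lvs-uuid>  # later in the run: 100 - 1 -> 99 data clusters

bdevperf at @47 is the initiator for this test; it drives 128-deep 4 KiB random writes at the lvol-backed namespace for 10 seconds while the grow happens mid-run.
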
00:16:49.792 21:22:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:49.792 21:22:04 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:49.792 21:22:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:49.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:49.792 21:22:04 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:49.792 21:22:04 -- common/autotest_common.sh@10 -- # set +x 00:16:49.792 [2024-04-24 21:22:04.546070] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization... 00:16:49.792 [2024-04-24 21:22:04.546150] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1189828 ] 00:16:49.792 EAL: No free 2048 kB hugepages reported on node 1 00:16:49.792 [2024-04-24 21:22:04.634428] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:49.792 [2024-04-24 21:22:04.724568] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:50.362 21:22:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:50.362 21:22:05 -- common/autotest_common.sh@850 -- # return 0 00:16:50.362 21:22:05 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:16:50.621 Nvme0n1 00:16:50.621 21:22:05 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:16:50.621 [ 00:16:50.621 { 00:16:50.621 "name": "Nvme0n1", 00:16:50.621 "aliases": [ 00:16:50.621 "4e5d36f1-8586-4f0e-9512-e1e1aeb0777b" 00:16:50.621 ], 00:16:50.621 "product_name": "NVMe disk", 00:16:50.621 "block_size": 4096, 00:16:50.621 "num_blocks": 38912, 00:16:50.621 "uuid": "4e5d36f1-8586-4f0e-9512-e1e1aeb0777b", 00:16:50.621 "assigned_rate_limits": { 00:16:50.621 "rw_ios_per_sec": 0, 00:16:50.621 "rw_mbytes_per_sec": 0, 00:16:50.621 "r_mbytes_per_sec": 0, 00:16:50.621 "w_mbytes_per_sec": 0 00:16:50.621 }, 00:16:50.621 "claimed": false, 00:16:50.621 "zoned": false, 00:16:50.621 "supported_io_types": { 00:16:50.621 "read": true, 00:16:50.621 "write": true, 00:16:50.621 "unmap": true, 00:16:50.621 "write_zeroes": true, 00:16:50.621 "flush": true, 00:16:50.621 "reset": true, 00:16:50.621 "compare": true, 00:16:50.621 "compare_and_write": true, 00:16:50.621 "abort": true, 00:16:50.621 "nvme_admin": true, 00:16:50.621 "nvme_io": true 00:16:50.621 }, 00:16:50.621 "memory_domains": [ 00:16:50.621 { 00:16:50.621 "dma_device_id": "system", 00:16:50.621 "dma_device_type": 1 00:16:50.621 } 00:16:50.621 ], 00:16:50.621 "driver_specific": { 00:16:50.621 "nvme": [ 00:16:50.621 { 00:16:50.621 "trid": { 00:16:50.621 "trtype": "TCP", 00:16:50.621 "adrfam": "IPv4", 00:16:50.621 "traddr": "10.0.0.2", 00:16:50.621 "trsvcid": "4420", 00:16:50.621 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:16:50.621 }, 00:16:50.621 "ctrlr_data": { 00:16:50.621 "cntlid": 1, 00:16:50.621 "vendor_id": "0x8086", 00:16:50.621 "model_number": "SPDK bdev Controller", 00:16:50.621 "serial_number": "SPDK0", 00:16:50.621 "firmware_revision": "24.05", 00:16:50.621 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:50.621 "oacs": { 
00:16:50.621 "security": 0, 00:16:50.621 "format": 0, 00:16:50.621 "firmware": 0, 00:16:50.621 "ns_manage": 0 00:16:50.621 }, 00:16:50.621 "multi_ctrlr": true, 00:16:50.621 "ana_reporting": false 00:16:50.621 }, 00:16:50.621 "vs": { 00:16:50.621 "nvme_version": "1.3" 00:16:50.621 }, 00:16:50.621 "ns_data": { 00:16:50.621 "id": 1, 00:16:50.621 "can_share": true 00:16:50.621 } 00:16:50.621 } 00:16:50.621 ], 00:16:50.621 "mp_policy": "active_passive" 00:16:50.621 } 00:16:50.621 } 00:16:50.621 ] 00:16:50.621 21:22:05 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1189917 00:16:50.621 21:22:05 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:16:50.621 21:22:05 -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:50.881 Running I/O for 10 seconds... 00:16:51.819 Latency(us) 00:16:51.819 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:51.819 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:51.819 Nvme0n1 : 1.00 22645.00 88.46 0.00 0.00 0.00 0.00 0.00 00:16:51.819 =================================================================================================================== 00:16:51.819 Total : 22645.00 88.46 0.00 0.00 0.00 0.00 0.00 00:16:51.819 00:16:52.757 21:22:07 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 0c60fcdd-e039-4507-8f52-501a0a0d47e3 00:16:52.757 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:52.757 Nvme0n1 : 2.00 22851.50 89.26 0.00 0.00 0.00 0.00 0.00 00:16:52.757 =================================================================================================================== 00:16:52.757 Total : 22851.50 89.26 0.00 0.00 0.00 0.00 0.00 00:16:52.757 00:16:52.757 true 00:16:52.757 21:22:07 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0c60fcdd-e039-4507-8f52-501a0a0d47e3 00:16:52.757 21:22:07 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:16:53.014 21:22:07 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:16:53.014 21:22:07 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:16:53.014 21:22:07 -- target/nvmf_lvs_grow.sh@65 -- # wait 1189917 00:16:53.948 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:53.948 Nvme0n1 : 3.00 22911.00 89.50 0.00 0.00 0.00 0.00 0.00 00:16:53.948 =================================================================================================================== 00:16:53.948 Total : 22911.00 89.50 0.00 0.00 0.00 0.00 0.00 00:16:53.948 00:16:54.886 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:54.886 Nvme0n1 : 4.00 22901.00 89.46 0.00 0.00 0.00 0.00 0.00 00:16:54.886 =================================================================================================================== 00:16:54.886 Total : 22901.00 89.46 0.00 0.00 0.00 0.00 0.00 00:16:54.886 00:16:55.822 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:55.822 Nvme0n1 : 5.00 22882.00 89.38 0.00 0.00 0.00 0.00 0.00 00:16:55.822 =================================================================================================================== 00:16:55.822 Total : 22882.00 89.38 0.00 0.00 0.00 0.00 0.00 00:16:55.822 00:16:56.758 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 
4096) 00:16:56.759 Nvme0n1 : 6.00 22921.83 89.54 0.00 0.00 0.00 0.00 0.00 00:16:56.759 =================================================================================================================== 00:16:56.759 Total : 22921.83 89.54 0.00 0.00 0.00 0.00 0.00 00:16:56.759 00:16:57.697 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:57.697 Nvme0n1 : 7.00 22949.14 89.65 0.00 0.00 0.00 0.00 0.00 00:16:57.697 =================================================================================================================== 00:16:57.697 Total : 22949.14 89.65 0.00 0.00 0.00 0.00 0.00 00:16:57.697 00:16:58.646 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:58.646 Nvme0n1 : 8.00 22965.38 89.71 0.00 0.00 0.00 0.00 0.00 00:16:58.646 =================================================================================================================== 00:16:58.646 Total : 22965.38 89.71 0.00 0.00 0.00 0.00 0.00 00:16:58.646 00:17:00.025 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:00.025 Nvme0n1 : 9.00 22987.89 89.80 0.00 0.00 0.00 0.00 0.00 00:17:00.025 =================================================================================================================== 00:17:00.025 Total : 22987.89 89.80 0.00 0.00 0.00 0.00 0.00 00:17:00.025 00:17:00.964 00:17:00.964 Latency(us) 00:17:00.964 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:00.964 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:00.964 Nvme0n1 : 10.00 23000.77 89.85 0.00 0.00 5562.61 2035.07 13383.14 00:17:00.964 =================================================================================================================== 00:17:00.964 Total : 23000.77 89.85 0.00 0.00 5562.61 2035.07 13383.14 00:17:00.964 0 00:17:00.964 21:22:15 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1189828 00:17:00.964 21:22:15 -- common/autotest_common.sh@936 -- # '[' -z 1189828 ']' 00:17:00.964 21:22:15 -- common/autotest_common.sh@940 -- # kill -0 1189828 00:17:00.964 21:22:15 -- common/autotest_common.sh@941 -- # uname 00:17:00.964 21:22:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:00.964 21:22:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1189828 00:17:00.964 21:22:15 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:00.964 21:22:15 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:00.964 21:22:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1189828' 00:17:00.964 killing process with pid 1189828 00:17:00.964 21:22:15 -- common/autotest_common.sh@955 -- # kill 1189828 00:17:00.964 Received shutdown signal, test time was about 10.000000 seconds 00:17:00.964 00:17:00.965 Latency(us) 00:17:00.965 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:00.965 =================================================================================================================== 00:17:00.965 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:00.965 21:22:15 -- common/autotest_common.sh@960 -- # wait 1189828 00:17:01.224 21:22:16 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:01.224 21:22:16 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:01.483 
21:22:16 -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0c60fcdd-e039-4507-8f52-501a0a0d47e3 00:17:01.483 21:22:16 -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:17:01.741 21:22:16 -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:17:01.741 21:22:16 -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:17:01.741 21:22:16 -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:01.741 [2024-04-24 21:22:16.642904] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:17:01.741 21:22:16 -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0c60fcdd-e039-4507-8f52-501a0a0d47e3 00:17:01.741 21:22:16 -- common/autotest_common.sh@638 -- # local es=0 00:17:01.741 21:22:16 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0c60fcdd-e039-4507-8f52-501a0a0d47e3 00:17:01.741 21:22:16 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:17:01.741 21:22:16 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:01.741 21:22:16 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:17:01.741 21:22:16 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:01.741 21:22:16 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:17:01.741 21:22:16 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:01.741 21:22:16 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:17:01.741 21:22:16 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py ]] 00:17:01.741 21:22:16 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0c60fcdd-e039-4507-8f52-501a0a0d47e3 00:17:01.999 request: 00:17:01.999 { 00:17:01.999 "uuid": "0c60fcdd-e039-4507-8f52-501a0a0d47e3", 00:17:01.999 "method": "bdev_lvol_get_lvstores", 00:17:01.999 "req_id": 1 00:17:01.999 } 00:17:01.999 Got JSON-RPC error response 00:17:01.999 response: 00:17:01.999 { 00:17:01.999 "code": -19, 00:17:01.999 "message": "No such device" 00:17:01.999 } 00:17:01.999 21:22:16 -- common/autotest_common.sh@641 -- # es=1 00:17:01.999 21:22:16 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:17:01.999 21:22:16 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:17:01.999 21:22:16 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:17:01.999 21:22:16 -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:01.999 aio_bdev 00:17:01.999 21:22:16 -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 4e5d36f1-8586-4f0e-9512-e1e1aeb0777b 00:17:01.999 21:22:16 -- common/autotest_common.sh@885 -- # local bdev_name=4e5d36f1-8586-4f0e-9512-e1e1aeb0777b 00:17:01.999 21:22:16 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:17:01.999 21:22:16 -- common/autotest_common.sh@887 -- # local i 00:17:01.999 21:22:16 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 
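
Two things are worth decoding in this stretch. First, free_clusters=61 at @70 is consistent with the grown store: 99 data clusters minus the 38 clusters (150 MiB at 4 MiB per cluster, rounded up) backing the thick-provisioned lvol. Second, @84/@85 form a deliberate negative test: deleting aio_bdev hot-removes the base bdev and closes the lvstore, and NOT inverts the exit status of the next command, so the -19 'No such device' JSON-RPC error above is the expected outcome. An equivalent inline form of that check, as a sketch:

    rpc.py bdev_aio_delete aio_bdev                        # hot-remove closes lvstore 'lvs'
    if rpc.py bdev_lvol_get_lvstores -u <lvs-uuid>; then   # must now fail with -19 (ENODEV)
        echo "lvstore survived base bdev removal" >&2
        exit 1
    fi
    rpc.py bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096   # re-attach the file
    rpc.py bdev_wait_for_examine                           # lvstore/lvol reappear on examine

waitforbdev then polls for the lvol UUID to come back before the test inspects it.
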
00:17:01.999 21:22:16 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:17:01.999 21:22:16 -- common/autotest_common.sh@890 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:02.258 21:22:17 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 4e5d36f1-8586-4f0e-9512-e1e1aeb0777b -t 2000 00:17:02.258 [ 00:17:02.258 { 00:17:02.258 "name": "4e5d36f1-8586-4f0e-9512-e1e1aeb0777b", 00:17:02.258 "aliases": [ 00:17:02.258 "lvs/lvol" 00:17:02.258 ], 00:17:02.258 "product_name": "Logical Volume", 00:17:02.258 "block_size": 4096, 00:17:02.258 "num_blocks": 38912, 00:17:02.258 "uuid": "4e5d36f1-8586-4f0e-9512-e1e1aeb0777b", 00:17:02.258 "assigned_rate_limits": { 00:17:02.258 "rw_ios_per_sec": 0, 00:17:02.258 "rw_mbytes_per_sec": 0, 00:17:02.258 "r_mbytes_per_sec": 0, 00:17:02.258 "w_mbytes_per_sec": 0 00:17:02.258 }, 00:17:02.258 "claimed": false, 00:17:02.258 "zoned": false, 00:17:02.258 "supported_io_types": { 00:17:02.258 "read": true, 00:17:02.258 "write": true, 00:17:02.258 "unmap": true, 00:17:02.258 "write_zeroes": true, 00:17:02.258 "flush": false, 00:17:02.258 "reset": true, 00:17:02.258 "compare": false, 00:17:02.258 "compare_and_write": false, 00:17:02.258 "abort": false, 00:17:02.258 "nvme_admin": false, 00:17:02.258 "nvme_io": false 00:17:02.258 }, 00:17:02.258 "driver_specific": { 00:17:02.258 "lvol": { 00:17:02.258 "lvol_store_uuid": "0c60fcdd-e039-4507-8f52-501a0a0d47e3", 00:17:02.258 "base_bdev": "aio_bdev", 00:17:02.258 "thin_provision": false, 00:17:02.258 "snapshot": false, 00:17:02.258 "clone": false, 00:17:02.258 "esnap_clone": false 00:17:02.258 } 00:17:02.258 } 00:17:02.258 } 00:17:02.258 ] 00:17:02.258 21:22:17 -- common/autotest_common.sh@893 -- # return 0 00:17:02.258 21:22:17 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0c60fcdd-e039-4507-8f52-501a0a0d47e3 00:17:02.258 21:22:17 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:17:02.519 21:22:17 -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:17:02.519 21:22:17 -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0c60fcdd-e039-4507-8f52-501a0a0d47e3 00:17:02.519 21:22:17 -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:17:02.519 21:22:17 -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:17:02.519 21:22:17 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 4e5d36f1-8586-4f0e-9512-e1e1aeb0777b 00:17:02.778 21:22:17 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0c60fcdd-e039-4507-8f52-501a0a0d47e3 00:17:03.037 21:22:17 -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:03.037 21:22:17 -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:03.037 00:17:03.037 real 0m14.744s 00:17:03.037 user 0m14.325s 00:17:03.037 sys 0m1.109s 00:17:03.037 21:22:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:03.037 21:22:17 -- common/autotest_common.sh@10 -- # set +x 00:17:03.037 ************************************ 00:17:03.037 END TEST lvs_grow_clean 00:17:03.037 ************************************ 00:17:03.037 21:22:17 -- 
target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:17:03.037 21:22:17 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:03.037 21:22:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:03.037 21:22:17 -- common/autotest_common.sh@10 -- # set +x 00:17:03.295 ************************************ 00:17:03.295 START TEST lvs_grow_dirty 00:17:03.295 ************************************ 00:17:03.295 21:22:18 -- common/autotest_common.sh@1111 -- # lvs_grow dirty 00:17:03.295 21:22:18 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:17:03.295 21:22:18 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:17:03.295 21:22:18 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:17:03.295 21:22:18 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:17:03.296 21:22:18 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:17:03.296 21:22:18 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:17:03.296 21:22:18 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:03.296 21:22:18 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:03.296 21:22:18 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:03.296 21:22:18 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:17:03.296 21:22:18 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:17:03.553 21:22:18 -- target/nvmf_lvs_grow.sh@28 -- # lvs=e36f242a-9a32-4654-b6c3-0fc78c48be0f 00:17:03.553 21:22:18 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e36f242a-9a32-4654-b6c3-0fc78c48be0f 00:17:03.553 21:22:18 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:17:03.553 21:22:18 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:17:03.553 21:22:18 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:17:03.553 21:22:18 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u e36f242a-9a32-4654-b6c3-0fc78c48be0f lvol 150 00:17:03.811 21:22:18 -- target/nvmf_lvs_grow.sh@33 -- # lvol=a3819b05-07f2-4cac-9f22-6bd6a3a407c0 00:17:03.811 21:22:18 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:03.811 21:22:18 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:17:03.811 [2024-04-24 21:22:18.754083] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:17:03.811 [2024-04-24 21:22:18.754148] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:17:03.811 true 00:17:03.811 21:22:18 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e36f242a-9a32-4654-b6c3-0fc78c48be0f 00:17:03.811 21:22:18 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 
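
lvs_grow_dirty replays the same setup with the dirty argument; the visible differences so far are only the fresh lvstore and lvol UUIDs (e36f242a-... and a3819b05-...), with the behavioral split gated on the @72 [[ ... == dirty ]] test that the clean run failed. For the recovery assertion the clean pass just completed, verification amounts to re-reading the lvol and confirming it re-attached to its store, e.g. with the UUIDs from that pass:

    rpc.py bdev_get_bdevs -b 4e5d36f1-8586-4f0e-9512-e1e1aeb0777b -t 2000 \
        | jq -r '.[0].driver_specific.lvol.lvol_store_uuid'
    # expect: 0c60fcdd-e039-4507-8f52-501a0a0d47e3

That jq path follows the bdev JSON dumped above, where driver_specific.lvol also records thin_provision=false, matching the 61-free-cluster accounting.
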
00:17:04.071 21:22:18 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:17:04.071 21:22:18 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:04.071 21:22:19 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a3819b05-07f2-4cac-9f22-6bd6a3a407c0 00:17:04.330 21:22:19 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:04.330 [2024-04-24 21:22:19.254472] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:04.330 21:22:19 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:04.588 21:22:19 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1192623 00:17:04.589 21:22:19 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:04.589 21:22:19 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:17:04.589 21:22:19 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1192623 /var/tmp/bdevperf.sock 00:17:04.589 21:22:19 -- common/autotest_common.sh@817 -- # '[' -z 1192623 ']' 00:17:04.589 21:22:19 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:04.589 21:22:19 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:04.589 21:22:19 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:04.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:04.589 21:22:19 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:04.589 21:22:19 -- common/autotest_common.sh@10 -- # set +x 00:17:04.589 [2024-04-24 21:22:19.438153] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization... 
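
The EAL banner at this point is bdevperf, not the target: the initiator comes up in the root namespace with its own RPC socket, gets an NVMe-oF controller attached across the namespace boundary, and only then does the helper script start the 10-second job. The two RPCs that wire it up, as issued at @52 and @55 below:

    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

Nvme0n1 in the bdev dump that follows is the lvol-backed namespace seen through that controller.
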
00:17:04.589 [2024-04-24 21:22:19.438238] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1192623 ] 00:17:04.589 EAL: No free 2048 kB hugepages reported on node 1 00:17:04.589 [2024-04-24 21:22:19.526449] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:04.847 [2024-04-24 21:22:19.618574] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:05.418 21:22:20 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:05.418 21:22:20 -- common/autotest_common.sh@850 -- # return 0 00:17:05.418 21:22:20 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:17:05.683 Nvme0n1 00:17:05.683 21:22:20 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:17:05.683 [ 00:17:05.683 { 00:17:05.683 "name": "Nvme0n1", 00:17:05.683 "aliases": [ 00:17:05.683 "a3819b05-07f2-4cac-9f22-6bd6a3a407c0" 00:17:05.683 ], 00:17:05.683 "product_name": "NVMe disk", 00:17:05.683 "block_size": 4096, 00:17:05.683 "num_blocks": 38912, 00:17:05.683 "uuid": "a3819b05-07f2-4cac-9f22-6bd6a3a407c0", 00:17:05.683 "assigned_rate_limits": { 00:17:05.683 "rw_ios_per_sec": 0, 00:17:05.683 "rw_mbytes_per_sec": 0, 00:17:05.683 "r_mbytes_per_sec": 0, 00:17:05.683 "w_mbytes_per_sec": 0 00:17:05.683 }, 00:17:05.683 "claimed": false, 00:17:05.683 "zoned": false, 00:17:05.683 "supported_io_types": { 00:17:05.683 "read": true, 00:17:05.683 "write": true, 00:17:05.683 "unmap": true, 00:17:05.683 "write_zeroes": true, 00:17:05.683 "flush": true, 00:17:05.683 "reset": true, 00:17:05.683 "compare": true, 00:17:05.683 "compare_and_write": true, 00:17:05.683 "abort": true, 00:17:05.683 "nvme_admin": true, 00:17:05.683 "nvme_io": true 00:17:05.683 }, 00:17:05.683 "memory_domains": [ 00:17:05.683 { 00:17:05.683 "dma_device_id": "system", 00:17:05.683 "dma_device_type": 1 00:17:05.683 } 00:17:05.683 ], 00:17:05.683 "driver_specific": { 00:17:05.683 "nvme": [ 00:17:05.683 { 00:17:05.683 "trid": { 00:17:05.683 "trtype": "TCP", 00:17:05.683 "adrfam": "IPv4", 00:17:05.683 "traddr": "10.0.0.2", 00:17:05.683 "trsvcid": "4420", 00:17:05.683 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:05.683 }, 00:17:05.683 "ctrlr_data": { 00:17:05.683 "cntlid": 1, 00:17:05.683 "vendor_id": "0x8086", 00:17:05.683 "model_number": "SPDK bdev Controller", 00:17:05.683 "serial_number": "SPDK0", 00:17:05.683 "firmware_revision": "24.05", 00:17:05.683 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:05.683 "oacs": { 00:17:05.683 "security": 0, 00:17:05.683 "format": 0, 00:17:05.683 "firmware": 0, 00:17:05.683 "ns_manage": 0 00:17:05.683 }, 00:17:05.683 "multi_ctrlr": true, 00:17:05.683 "ana_reporting": false 00:17:05.683 }, 00:17:05.683 "vs": { 00:17:05.683 "nvme_version": "1.3" 00:17:05.683 }, 00:17:05.683 "ns_data": { 00:17:05.683 "id": 1, 00:17:05.683 "can_share": true 00:17:05.683 } 00:17:05.683 } 00:17:05.683 ], 00:17:05.683 "mp_policy": "active_passive" 00:17:05.683 } 00:17:05.683 } 00:17:05.683 ] 00:17:05.683 21:22:20 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1192904 00:17:05.683 21:22:20 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:17:05.683 21:22:20 -- target/nvmf_lvs_grow.sh@55 -- # 
/var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:17:05.943 Running I/O for 10 seconds...
00:17:06.881 Latency(us)
00:17:06.881 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:06.881 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:17:06.881 Nvme0n1 : 1.00 22711.00 88.71 0.00 0.00 0.00 0.00 0.00
00:17:06.881 ===================================================================================================================
00:17:06.881 Total : 22711.00 88.71 0.00 0.00 0.00 0.00 0.00
00:17:06.881
00:17:07.826 21:22:22 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u e36f242a-9a32-4654-b6c3-0fc78c48be0f
00:17:07.826 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:17:07.826 Nvme0n1 : 2.00 22914.00 89.51 0.00 0.00 0.00 0.00 0.00
00:17:07.826 ===================================================================================================================
00:17:07.826 Total : 22914.00 89.51 0.00 0.00 0.00 0.00 0.00
00:17:07.826
00:17:07.826 true
00:17:07.826 21:22:22 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters'
00:17:07.826 21:22:22 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e36f242a-9a32-4654-b6c3-0fc78c48be0f
00:17:08.206 21:22:22 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99
00:17:08.206 21:22:22 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 ))
00:17:08.206 21:22:22 -- target/nvmf_lvs_grow.sh@65 -- # wait 1192904
00:17:08.773 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:17:08.773 Nvme0n1 : 3.00 22975.00 89.75 0.00 0.00 0.00 0.00 0.00
00:17:08.773 ===================================================================================================================
00:17:08.773 Total : 22975.00 89.75 0.00 0.00 0.00 0.00 0.00
00:17:08.773
00:17:10.151 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:17:10.151 Nvme0n1 : 4.00 23014.25 89.90 0.00 0.00 0.00 0.00 0.00
00:17:10.151 ===================================================================================================================
00:17:10.151 Total : 23014.25 89.90 0.00 0.00 0.00 0.00 0.00
00:17:10.151
00:17:11.090 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:17:11.090 Nvme0n1 : 5.00 23061.60 90.08 0.00 0.00 0.00 0.00 0.00
00:17:11.090 ===================================================================================================================
00:17:11.090 Total : 23061.60 90.08 0.00 0.00 0.00 0.00 0.00
00:17:11.090
00:17:12.026 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:17:12.026 Nvme0n1 : 6.00 23072.83 90.13 0.00 0.00 0.00 0.00 0.00
00:17:12.026 ===================================================================================================================
00:17:12.026 Total : 23072.83 90.13 0.00 0.00 0.00 0.00 0.00
00:17:12.026
00:17:12.963 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:17:12.963 Nvme0n1 : 7.00 23106.57 90.26 0.00 0.00 0.00 0.00 0.00
00:17:12.963 ===================================================================================================================
00:17:12.963 Total : 23106.57 90.26 0.00 0.00 0.00 0.00 0.00
00:17:12.963
00:17:13.905 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:17:13.905 Nvme0n1 : 8.00 23122.00 90.32 0.00 0.00 0.00 0.00 0.00
00:17:13.905 ===================================================================================================================
00:17:13.905 Total : 23122.00 90.32 0.00 0.00 0.00 0.00 0.00
00:17:13.905
00:17:14.845 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:17:14.845 Nvme0n1 : 9.00 23123.00 90.32 0.00 0.00 0.00 0.00 0.00
00:17:14.845 ===================================================================================================================
00:17:14.845 Total : 23123.00 90.32 0.00 0.00 0.00 0.00 0.00
00:17:14.845
00:17:15.782 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:17:15.782 Nvme0n1 : 10.00 23097.70 90.23 0.00 0.00 0.00 0.00 0.00
00:17:15.782 ===================================================================================================================
00:17:15.782 Total : 23097.70 90.23 0.00 0.00 0.00 0.00 0.00
00:17:15.782
00:17:15.782
00:17:15.782 Latency(us)
00:17:15.782 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:15.782 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:17:15.782 Nvme0n1 : 10.00 23100.71 90.24 0.00 0.00 5538.04 3449.26 13797.05
00:17:15.782 ===================================================================================================================
00:17:15.782 Total : 23100.71 90.24 0.00 0.00 5538.04 3449.26 13797.05
00:17:15.782 0
00:17:15.782 21:22:30 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1192623
00:17:15.782 21:22:30 -- common/autotest_common.sh@936 -- # '[' -z 1192623 ']'
00:17:15.782 21:22:30 -- common/autotest_common.sh@940 -- # kill -0 1192623
00:17:15.782 21:22:30 -- common/autotest_common.sh@941 -- # uname
00:17:15.782 21:22:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:17:15.782 21:22:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1192623
00:17:16.042 21:22:30 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:17:16.042 21:22:30 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:17:16.042 21:22:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1192623'
00:17:16.042 killing process with pid 1192623 21:22:30 -- common/autotest_common.sh@955 -- # kill 1192623
00:17:16.042 Received shutdown signal, test time was about 10.000000 seconds
00:17:16.042
00:17:16.042 Latency(us)
00:17:16.042 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:16.042 ===================================================================================================================
00:17:16.042 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:17:16.042 21:22:30 -- common/autotest_common.sh@960 -- # wait 1192623
00:17:16.302 21:22:31 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:17:16.302 21:22:31 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:17:16.562 21:22:31 -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e36f242a-9a32-4654-b6c3-0fc78c48be0f
00:17:16.562 21:22:31 -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters'
00:17:16.562 21:22:31 -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61
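The run above is the heart of the grow check: bdevperf drives a 10-second randwrite job against the exported lvol while bdev_lvol_grow_lvstore enlarges the store underneath it, and the script re-reads the cluster counts over JSON-RPC (49 data clusters before the grow, 99 after). A minimal standalone sketch of the same flow, reusing the rpc.py path, socket, and lvstore UUID from this run; waitforlisten is the helper from autotest_common.sh:

  sock=/var/tmp/bdevperf.sock
  rpc=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py
  uuid=e36f242a-9a32-4654-b6c3-0fc78c48be0f
  # start bdevperf idle (-z) so the workload can be kicked off over RPC later
  /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf \
      -r "$sock" -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
  waitforlisten $! "$sock"
  # attach the namespace exported by the target, then fire the I/O job
  "$rpc" -s "$sock" bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 \
      -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
  /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s "$sock" perform_tests &
  # while I/O is in flight, grow the store and confirm the new cluster count
  "$rpc" bdev_lvol_grow_lvstore -u "$uuid"
  (( $("$rpc" bdev_lvol_get_lvstores -u "$uuid" | jq -r '.[0].total_data_clusters') == 99 ))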
00:17:16.562 21:22:31 -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:17:16.563 21:22:31 -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1189184 00:17:16.563 21:22:31 -- target/nvmf_lvs_grow.sh@75 -- # wait 1189184 00:17:16.821 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1189184 Killed "${NVMF_APP[@]}" "$@" 00:17:16.821 21:22:31 -- target/nvmf_lvs_grow.sh@75 -- # true 00:17:16.821 21:22:31 -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:17:16.821 21:22:31 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:16.821 21:22:31 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:16.821 21:22:31 -- common/autotest_common.sh@10 -- # set +x 00:17:16.821 21:22:31 -- nvmf/common.sh@470 -- # nvmfpid=1195003 00:17:16.821 21:22:31 -- nvmf/common.sh@471 -- # waitforlisten 1195003 00:17:16.821 21:22:31 -- common/autotest_common.sh@817 -- # '[' -z 1195003 ']' 00:17:16.821 21:22:31 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:16.821 21:22:31 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:16.821 21:22:31 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:16.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:16.821 21:22:31 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:16.821 21:22:31 -- common/autotest_common.sh@10 -- # set +x 00:17:16.821 21:22:31 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:16.821 [2024-04-24 21:22:31.635901] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization... 00:17:16.821 [2024-04-24 21:22:31.636007] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:16.821 EAL: No free 2048 kB hugepages reported on node 1 00:17:16.821 [2024-04-24 21:22:31.759579] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:17.079 [2024-04-24 21:22:31.855178] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:17.079 [2024-04-24 21:22:31.855214] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:17.079 [2024-04-24 21:22:31.855223] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:17.079 [2024-04-24 21:22:31.855233] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:17.079 [2024-04-24 21:22:31.855242] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
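This is the dirty-shutdown leg of the test: the nvmf target that owned the lvstore was killed with SIGKILL (pid 1189184 above) while the store was still marked dirty, and a fresh target has just been started in its place. Re-creating the AIO bdev in the next step forces blobstore recovery of the lvstore metadata, which is what the bs_recover and "Recover: blob" notices below report. A rough sketch of the sequence, with $rpc and $testdir as shorthands for the rpc.py path and the test's nvmf/target directory:

  kill -9 1189184    # crash the target while the lvstore is still dirty
  nvmfappstart -m 0x1    # nvmf/common.sh helper: relaunch nvmf_tgt, wait for its RPC socket
  # re-attaching the backing file replays the blobstore and recovers lvs/lvol
  "$rpc" bdev_aio_create "$testdir/aio_bdev" aio_bdev 4096
  waitforbdev a3819b05-07f2-4cac-9f22-6bd6a3a407c0    # the lvol resurfaces once recovered
  # a recovered dirty store still reports 61 of the grown 99 clusters free
  (( $("$rpc" bdev_lvol_get_lvstores -u e36f242a-9a32-4654-b6c3-0fc78c48be0f \
      | jq -r '.[0].free_clusters') == 61 ))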
00:17:17.079 [2024-04-24 21:22:31.855273] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:17.649 21:22:32 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:17.649 21:22:32 -- common/autotest_common.sh@850 -- # return 0 00:17:17.649 21:22:32 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:17.649 21:22:32 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:17.649 21:22:32 -- common/autotest_common.sh@10 -- # set +x 00:17:17.649 21:22:32 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:17.649 21:22:32 -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:17.649 [2024-04-24 21:22:32.481964] blobstore.c:4779:bs_recover: *NOTICE*: Performing recovery on blobstore 00:17:17.649 [2024-04-24 21:22:32.482103] blobstore.c:4726:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:17:17.649 [2024-04-24 21:22:32.482132] blobstore.c:4726:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:17:17.649 21:22:32 -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:17:17.649 21:22:32 -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev a3819b05-07f2-4cac-9f22-6bd6a3a407c0 00:17:17.649 21:22:32 -- common/autotest_common.sh@885 -- # local bdev_name=a3819b05-07f2-4cac-9f22-6bd6a3a407c0 00:17:17.649 21:22:32 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:17:17.649 21:22:32 -- common/autotest_common.sh@887 -- # local i 00:17:17.649 21:22:32 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:17:17.649 21:22:32 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:17:17.649 21:22:32 -- common/autotest_common.sh@890 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:17.910 21:22:32 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b a3819b05-07f2-4cac-9f22-6bd6a3a407c0 -t 2000 00:17:17.910 [ 00:17:17.910 { 00:17:17.910 "name": "a3819b05-07f2-4cac-9f22-6bd6a3a407c0", 00:17:17.910 "aliases": [ 00:17:17.910 "lvs/lvol" 00:17:17.910 ], 00:17:17.910 "product_name": "Logical Volume", 00:17:17.910 "block_size": 4096, 00:17:17.910 "num_blocks": 38912, 00:17:17.910 "uuid": "a3819b05-07f2-4cac-9f22-6bd6a3a407c0", 00:17:17.910 "assigned_rate_limits": { 00:17:17.910 "rw_ios_per_sec": 0, 00:17:17.910 "rw_mbytes_per_sec": 0, 00:17:17.910 "r_mbytes_per_sec": 0, 00:17:17.910 "w_mbytes_per_sec": 0 00:17:17.910 }, 00:17:17.910 "claimed": false, 00:17:17.910 "zoned": false, 00:17:17.910 "supported_io_types": { 00:17:17.910 "read": true, 00:17:17.910 "write": true, 00:17:17.910 "unmap": true, 00:17:17.910 "write_zeroes": true, 00:17:17.910 "flush": false, 00:17:17.910 "reset": true, 00:17:17.910 "compare": false, 00:17:17.910 "compare_and_write": false, 00:17:17.910 "abort": false, 00:17:17.910 "nvme_admin": false, 00:17:17.910 "nvme_io": false 00:17:17.910 }, 00:17:17.910 "driver_specific": { 00:17:17.910 "lvol": { 00:17:17.910 "lvol_store_uuid": "e36f242a-9a32-4654-b6c3-0fc78c48be0f", 00:17:17.910 "base_bdev": "aio_bdev", 00:17:17.910 "thin_provision": false, 00:17:17.910 "snapshot": false, 00:17:17.910 "clone": false, 00:17:17.910 "esnap_clone": false 00:17:17.910 } 00:17:17.910 } 00:17:17.910 } 00:17:17.910 ] 00:17:17.910 21:22:32 -- common/autotest_common.sh@893 -- # return 0 00:17:17.910 21:22:32 -- target/nvmf_lvs_grow.sh@79 -- # 
/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e36f242a-9a32-4654-b6c3-0fc78c48be0f 00:17:17.910 21:22:32 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:17:18.170 21:22:32 -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:17:18.170 21:22:32 -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e36f242a-9a32-4654-b6c3-0fc78c48be0f 00:17:18.170 21:22:32 -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:17:18.170 21:22:33 -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:17:18.170 21:22:33 -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:18.170 [2024-04-24 21:22:33.128086] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:17:18.429 21:22:33 -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e36f242a-9a32-4654-b6c3-0fc78c48be0f 00:17:18.429 21:22:33 -- common/autotest_common.sh@638 -- # local es=0 00:17:18.429 21:22:33 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e36f242a-9a32-4654-b6c3-0fc78c48be0f 00:17:18.429 21:22:33 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:17:18.429 21:22:33 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:18.429 21:22:33 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:17:18.429 21:22:33 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:18.429 21:22:33 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:17:18.429 21:22:33 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:18.429 21:22:33 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:17:18.429 21:22:33 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py ]] 00:17:18.429 21:22:33 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e36f242a-9a32-4654-b6c3-0fc78c48be0f 00:17:18.429 request: 00:17:18.429 { 00:17:18.429 "uuid": "e36f242a-9a32-4654-b6c3-0fc78c48be0f", 00:17:18.429 "method": "bdev_lvol_get_lvstores", 00:17:18.429 "req_id": 1 00:17:18.429 } 00:17:18.429 Got JSON-RPC error response 00:17:18.429 response: 00:17:18.429 { 00:17:18.429 "code": -19, 00:17:18.429 "message": "No such device" 00:17:18.429 } 00:17:18.429 21:22:33 -- common/autotest_common.sh@641 -- # es=1 00:17:18.429 21:22:33 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:17:18.429 21:22:33 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:17:18.429 21:22:33 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:17:18.429 21:22:33 -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:18.687 aio_bdev 00:17:18.687 21:22:33 -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev a3819b05-07f2-4cac-9f22-6bd6a3a407c0 00:17:18.687 21:22:33 -- common/autotest_common.sh@885 -- # local 
bdev_name=a3819b05-07f2-4cac-9f22-6bd6a3a407c0 00:17:18.687 21:22:33 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:17:18.687 21:22:33 -- common/autotest_common.sh@887 -- # local i 00:17:18.687 21:22:33 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:17:18.687 21:22:33 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:17:18.687 21:22:33 -- common/autotest_common.sh@890 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:18.687 21:22:33 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b a3819b05-07f2-4cac-9f22-6bd6a3a407c0 -t 2000 00:17:18.687 [ 00:17:18.687 { 00:17:18.687 "name": "a3819b05-07f2-4cac-9f22-6bd6a3a407c0", 00:17:18.687 "aliases": [ 00:17:18.687 "lvs/lvol" 00:17:18.687 ], 00:17:18.687 "product_name": "Logical Volume", 00:17:18.687 "block_size": 4096, 00:17:18.687 "num_blocks": 38912, 00:17:18.687 "uuid": "a3819b05-07f2-4cac-9f22-6bd6a3a407c0", 00:17:18.687 "assigned_rate_limits": { 00:17:18.687 "rw_ios_per_sec": 0, 00:17:18.687 "rw_mbytes_per_sec": 0, 00:17:18.687 "r_mbytes_per_sec": 0, 00:17:18.687 "w_mbytes_per_sec": 0 00:17:18.687 }, 00:17:18.687 "claimed": false, 00:17:18.687 "zoned": false, 00:17:18.688 "supported_io_types": { 00:17:18.688 "read": true, 00:17:18.688 "write": true, 00:17:18.688 "unmap": true, 00:17:18.688 "write_zeroes": true, 00:17:18.688 "flush": false, 00:17:18.688 "reset": true, 00:17:18.688 "compare": false, 00:17:18.688 "compare_and_write": false, 00:17:18.688 "abort": false, 00:17:18.688 "nvme_admin": false, 00:17:18.688 "nvme_io": false 00:17:18.688 }, 00:17:18.688 "driver_specific": { 00:17:18.688 "lvol": { 00:17:18.688 "lvol_store_uuid": "e36f242a-9a32-4654-b6c3-0fc78c48be0f", 00:17:18.688 "base_bdev": "aio_bdev", 00:17:18.688 "thin_provision": false, 00:17:18.688 "snapshot": false, 00:17:18.688 "clone": false, 00:17:18.688 "esnap_clone": false 00:17:18.688 } 00:17:18.688 } 00:17:18.688 } 00:17:18.688 ] 00:17:18.946 21:22:33 -- common/autotest_common.sh@893 -- # return 0 00:17:18.946 21:22:33 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e36f242a-9a32-4654-b6c3-0fc78c48be0f 00:17:18.946 21:22:33 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:17:18.946 21:22:33 -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:17:18.946 21:22:33 -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e36f242a-9a32-4654-b6c3-0fc78c48be0f 00:17:18.946 21:22:33 -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:17:19.207 21:22:33 -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:17:19.207 21:22:33 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a3819b05-07f2-4cac-9f22-6bd6a3a407c0 00:17:19.207 21:22:34 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e36f242a-9a32-4654-b6c3-0fc78c48be0f 00:17:19.468 21:22:34 -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:19.468 21:22:34 -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:19.468 00:17:19.468 real 0m16.332s 00:17:19.468 user 0m42.706s 00:17:19.468 sys 0m2.989s 00:17:19.468 21:22:34 -- 
common/autotest_common.sh@1112 -- # xtrace_disable 00:17:19.468 21:22:34 -- common/autotest_common.sh@10 -- # set +x 00:17:19.468 ************************************ 00:17:19.468 END TEST lvs_grow_dirty 00:17:19.468 ************************************ 00:17:19.468 21:22:34 -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:17:19.468 21:22:34 -- common/autotest_common.sh@794 -- # type=--id 00:17:19.468 21:22:34 -- common/autotest_common.sh@795 -- # id=0 00:17:19.468 21:22:34 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:17:19.468 21:22:34 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:19.468 21:22:34 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:17:19.468 21:22:34 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:17:19.468 21:22:34 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:17:19.468 21:22:34 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:19.468 nvmf_trace.0 00:17:19.468 21:22:34 -- common/autotest_common.sh@809 -- # return 0 00:17:19.468 21:22:34 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:17:19.468 21:22:34 -- nvmf/common.sh@477 -- # nvmfcleanup 00:17:19.468 21:22:34 -- nvmf/common.sh@117 -- # sync 00:17:19.468 21:22:34 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:19.468 21:22:34 -- nvmf/common.sh@120 -- # set +e 00:17:19.468 21:22:34 -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:19.468 21:22:34 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:19.468 rmmod nvme_tcp 00:17:19.729 rmmod nvme_fabrics 00:17:19.729 rmmod nvme_keyring 00:17:19.729 21:22:34 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:19.729 21:22:34 -- nvmf/common.sh@124 -- # set -e 00:17:19.729 21:22:34 -- nvmf/common.sh@125 -- # return 0 00:17:19.729 21:22:34 -- nvmf/common.sh@478 -- # '[' -n 1195003 ']' 00:17:19.729 21:22:34 -- nvmf/common.sh@479 -- # killprocess 1195003 00:17:19.729 21:22:34 -- common/autotest_common.sh@936 -- # '[' -z 1195003 ']' 00:17:19.729 21:22:34 -- common/autotest_common.sh@940 -- # kill -0 1195003 00:17:19.729 21:22:34 -- common/autotest_common.sh@941 -- # uname 00:17:19.729 21:22:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:19.729 21:22:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1195003 00:17:19.729 21:22:34 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:19.729 21:22:34 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:19.729 21:22:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1195003' 00:17:19.729 killing process with pid 1195003 00:17:19.729 21:22:34 -- common/autotest_common.sh@955 -- # kill 1195003 00:17:19.729 21:22:34 -- common/autotest_common.sh@960 -- # wait 1195003 00:17:20.297 21:22:35 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:17:20.297 21:22:35 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:17:20.297 21:22:35 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:17:20.297 21:22:35 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:20.297 21:22:35 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:20.297 21:22:35 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:20.297 21:22:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:20.297 21:22:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:22.204 21:22:37 -- nvmf/common.sh@279 -- # ip 
-4 addr flush cvl_0_1 00:17:22.204 00:17:22.204 real 0m40.869s 00:17:22.204 user 1m2.246s 00:17:22.204 sys 0m9.002s 00:17:22.204 21:22:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:22.204 21:22:37 -- common/autotest_common.sh@10 -- # set +x 00:17:22.204 ************************************ 00:17:22.204 END TEST nvmf_lvs_grow 00:17:22.204 ************************************ 00:17:22.204 21:22:37 -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:17:22.204 21:22:37 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:22.204 21:22:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:22.204 21:22:37 -- common/autotest_common.sh@10 -- # set +x 00:17:22.204 ************************************ 00:17:22.204 START TEST nvmf_bdev_io_wait 00:17:22.204 ************************************ 00:17:22.204 21:22:37 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:17:22.466 * Looking for test storage... 00:17:22.466 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:17:22.467 21:22:37 -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:17:22.467 21:22:37 -- nvmf/common.sh@7 -- # uname -s 00:17:22.467 21:22:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:22.467 21:22:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:22.467 21:22:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:22.467 21:22:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:22.467 21:22:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:22.467 21:22:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:22.467 21:22:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:22.467 21:22:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:22.467 21:22:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:22.467 21:22:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:22.467 21:22:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b7babf-2e5c-ee11-906e-a4bf01970bf2 00:17:22.467 21:22:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=80b7babf-2e5c-ee11-906e-a4bf01970bf2 00:17:22.467 21:22:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:22.467 21:22:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:22.467 21:22:37 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:17:22.467 21:22:37 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:22.467 21:22:37 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:17:22.467 21:22:37 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:22.467 21:22:37 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:22.467 21:22:37 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:22.467 21:22:37 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:22.467 21:22:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:22.467 21:22:37 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:22.467 21:22:37 -- paths/export.sh@5 -- # export PATH 00:17:22.467 21:22:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:22.467 21:22:37 -- nvmf/common.sh@47 -- # : 0 00:17:22.467 21:22:37 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:22.467 21:22:37 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:22.467 21:22:37 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:22.467 21:22:37 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:22.467 21:22:37 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:22.467 21:22:37 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:22.467 21:22:37 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:22.467 21:22:37 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:22.467 21:22:37 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:22.467 21:22:37 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:22.467 21:22:37 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:17:22.467 21:22:37 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:17:22.467 21:22:37 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:22.467 21:22:37 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:22.467 21:22:37 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:22.467 21:22:37 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:22.467 21:22:37 -- nvmf/common.sh@617 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:17:22.467 21:22:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:22.467 21:22:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:22.467 21:22:37 -- nvmf/common.sh@403 -- # [[ phy-fallback != virt ]] 00:17:22.467 21:22:37 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:17:22.467 21:22:37 -- nvmf/common.sh@285 -- # xtrace_disable 00:17:22.467 21:22:37 -- common/autotest_common.sh@10 -- # set +x 00:17:29.048 21:22:42 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:29.048 21:22:42 -- nvmf/common.sh@291 -- # pci_devs=() 00:17:29.048 21:22:42 -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:29.048 21:22:42 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:29.048 21:22:42 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:29.048 21:22:42 -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:29.048 21:22:42 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:29.048 21:22:42 -- nvmf/common.sh@295 -- # net_devs=() 00:17:29.048 21:22:42 -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:29.048 21:22:42 -- nvmf/common.sh@296 -- # e810=() 00:17:29.048 21:22:42 -- nvmf/common.sh@296 -- # local -ga e810 00:17:29.048 21:22:42 -- nvmf/common.sh@297 -- # x722=() 00:17:29.048 21:22:42 -- nvmf/common.sh@297 -- # local -ga x722 00:17:29.049 21:22:42 -- nvmf/common.sh@298 -- # mlx=() 00:17:29.049 21:22:42 -- nvmf/common.sh@298 -- # local -ga mlx 00:17:29.049 21:22:42 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:29.049 21:22:42 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:29.049 21:22:42 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:29.049 21:22:42 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:29.049 21:22:42 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:29.049 21:22:42 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:29.049 21:22:42 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:29.049 21:22:42 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:29.049 21:22:42 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:29.049 21:22:42 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:29.049 21:22:42 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:29.049 21:22:42 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:29.049 21:22:42 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:29.049 21:22:42 -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:17:29.049 21:22:42 -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:17:29.049 21:22:42 -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:17:29.049 21:22:42 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:29.049 21:22:42 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:29.049 21:22:42 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:17:29.049 Found 0000:27:00.0 (0x8086 - 0x159b) 00:17:29.049 21:22:42 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:29.049 21:22:42 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:29.049 21:22:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:29.049 21:22:42 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:29.049 21:22:42 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:29.049 21:22:42 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 
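The trace above is nvmf/common.sh taking stock of usable NICs: supported Intel and Mellanox device IDs are collected into pci_devs, and each PCI function's kernel netdev name is then read back out of sysfs (yielding cvl_0_0 and cvl_0_1 here). Condensed, the discovery loop being traced amounts to something like:

  for pci in "${pci_devs[@]}"; do
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
      (( ${#pci_net_devs[@]} == 0 )) && continue         # skip functions with no netdev
      pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the interface name
      net_devs+=("${pci_net_devs[@]}")
  done

With two interfaces found, the script parks cvl_0_0 in a private network namespace as the target side (10.0.0.2) and leaves cvl_0_1 in the root namespace as the initiator (10.0.0.1), which the ping checks below verify.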
00:17:29.049 21:22:42 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:17:29.049 Found 0000:27:00.1 (0x8086 - 0x159b) 00:17:29.049 21:22:42 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:29.049 21:22:42 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:29.049 21:22:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:29.049 21:22:42 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:29.049 21:22:42 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:29.049 21:22:42 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:29.049 21:22:42 -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:17:29.049 21:22:42 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:29.049 21:22:42 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:29.049 21:22:42 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:29.049 21:22:42 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:29.049 21:22:42 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:17:29.049 Found net devices under 0000:27:00.0: cvl_0_0 00:17:29.049 21:22:42 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:29.049 21:22:42 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:29.049 21:22:42 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:29.049 21:22:42 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:29.049 21:22:42 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:29.049 21:22:42 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:17:29.049 Found net devices under 0000:27:00.1: cvl_0_1 00:17:29.049 21:22:42 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:29.049 21:22:42 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:17:29.049 21:22:42 -- nvmf/common.sh@403 -- # is_hw=yes 00:17:29.049 21:22:42 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:17:29.049 21:22:42 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:17:29.049 21:22:42 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:17:29.049 21:22:42 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:29.049 21:22:42 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:29.049 21:22:42 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:29.049 21:22:42 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:29.049 21:22:42 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:29.049 21:22:42 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:29.049 21:22:42 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:29.049 21:22:42 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:29.049 21:22:42 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:29.049 21:22:42 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:29.049 21:22:42 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:29.049 21:22:42 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:29.049 21:22:42 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:29.049 21:22:42 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:29.049 21:22:42 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:29.049 21:22:42 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:29.049 21:22:42 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:29.049 21:22:42 -- nvmf/common.sh@261 
-- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:29.049 21:22:42 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:29.049 21:22:42 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:29.049 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:29.049 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.733 ms 00:17:29.049 00:17:29.049 --- 10.0.0.2 ping statistics --- 00:17:29.049 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:29.049 rtt min/avg/max/mdev = 0.733/0.733/0.733/0.000 ms 00:17:29.049 21:22:42 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:29.049 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:29.049 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.350 ms 00:17:29.049 00:17:29.049 --- 10.0.0.1 ping statistics --- 00:17:29.049 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:29.049 rtt min/avg/max/mdev = 0.350/0.350/0.350/0.000 ms 00:17:29.049 21:22:43 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:29.049 21:22:43 -- nvmf/common.sh@411 -- # return 0 00:17:29.049 21:22:43 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:29.049 21:22:43 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:29.049 21:22:43 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:17:29.049 21:22:43 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:17:29.049 21:22:43 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:29.049 21:22:43 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:17:29.049 21:22:43 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:17:29.049 21:22:43 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:17:29.049 21:22:43 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:29.049 21:22:43 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:29.049 21:22:43 -- common/autotest_common.sh@10 -- # set +x 00:17:29.049 21:22:43 -- nvmf/common.sh@470 -- # nvmfpid=1199600 00:17:29.049 21:22:43 -- nvmf/common.sh@471 -- # waitforlisten 1199600 00:17:29.049 21:22:43 -- common/autotest_common.sh@817 -- # '[' -z 1199600 ']' 00:17:29.049 21:22:43 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:29.049 21:22:43 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:29.049 21:22:43 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:17:29.049 21:22:43 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:29.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:29.049 21:22:43 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:29.049 21:22:43 -- common/autotest_common.sh@10 -- # set +x 00:17:29.049 [2024-04-24 21:22:43.138604] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization... 
00:17:29.049 [2024-04-24 21:22:43.138712] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:29.049 EAL: No free 2048 kB hugepages reported on node 1 00:17:29.049 [2024-04-24 21:22:43.266719] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:29.049 [2024-04-24 21:22:43.369568] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:29.049 [2024-04-24 21:22:43.369609] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:29.049 [2024-04-24 21:22:43.369621] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:29.049 [2024-04-24 21:22:43.369631] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:29.049 [2024-04-24 21:22:43.369639] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:29.049 [2024-04-24 21:22:43.369700] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:29.050 [2024-04-24 21:22:43.369721] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:29.050 [2024-04-24 21:22:43.369829] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:29.050 [2024-04-24 21:22:43.369839] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:29.050 21:22:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:29.050 21:22:43 -- common/autotest_common.sh@850 -- # return 0 00:17:29.050 21:22:43 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:29.050 21:22:43 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:29.050 21:22:43 -- common/autotest_common.sh@10 -- # set +x 00:17:29.050 21:22:43 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:29.050 21:22:43 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:17:29.050 21:22:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:29.050 21:22:43 -- common/autotest_common.sh@10 -- # set +x 00:17:29.050 21:22:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:29.050 21:22:43 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:17:29.050 21:22:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:29.050 21:22:43 -- common/autotest_common.sh@10 -- # set +x 00:17:29.050 21:22:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:29.050 21:22:43 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:29.050 21:22:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:29.050 21:22:43 -- common/autotest_common.sh@10 -- # set +x 00:17:29.050 [2024-04-24 21:22:43.977696] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:29.050 21:22:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:29.050 21:22:43 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:29.050 21:22:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:29.050 21:22:43 -- common/autotest_common.sh@10 -- # set +x 00:17:29.311 Malloc0 00:17:29.311 21:22:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:29.311 21:22:44 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:29.311 21:22:44 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:17:29.311 21:22:44 -- common/autotest_common.sh@10 -- # set +x 00:17:29.311 21:22:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:29.311 21:22:44 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:29.311 21:22:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:29.311 21:22:44 -- common/autotest_common.sh@10 -- # set +x 00:17:29.311 21:22:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:29.311 21:22:44 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:29.311 21:22:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:29.311 21:22:44 -- common/autotest_common.sh@10 -- # set +x 00:17:29.311 [2024-04-24 21:22:44.057175] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:29.311 21:22:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:29.311 21:22:44 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1199879 00:17:29.311 21:22:44 -- target/bdev_io_wait.sh@30 -- # READ_PID=1199880 00:17:29.311 21:22:44 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1199882 00:17:29.311 21:22:44 -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:17:29.311 21:22:44 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1199884 00:17:29.311 21:22:44 -- target/bdev_io_wait.sh@35 -- # sync 00:17:29.311 21:22:44 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:17:29.311 21:22:44 -- nvmf/common.sh@521 -- # config=() 00:17:29.311 21:22:44 -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:17:29.311 21:22:44 -- nvmf/common.sh@521 -- # local subsystem config 00:17:29.311 21:22:44 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:29.311 21:22:44 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:17:29.311 21:22:44 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:29.311 { 00:17:29.311 "params": { 00:17:29.311 "name": "Nvme$subsystem", 00:17:29.311 "trtype": "$TEST_TRANSPORT", 00:17:29.311 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:29.311 "adrfam": "ipv4", 00:17:29.311 "trsvcid": "$NVMF_PORT", 00:17:29.311 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:29.311 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:29.311 "hdgst": ${hdgst:-false}, 00:17:29.311 "ddgst": ${ddgst:-false} 00:17:29.311 }, 00:17:29.311 "method": "bdev_nvme_attach_controller" 00:17:29.311 } 00:17:29.311 EOF 00:17:29.311 )") 00:17:29.311 21:22:44 -- nvmf/common.sh@521 -- # config=() 00:17:29.311 21:22:44 -- nvmf/common.sh@521 -- # local subsystem config 00:17:29.311 21:22:44 -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:17:29.311 21:22:44 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:29.311 21:22:44 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:29.311 { 00:17:29.311 "params": { 00:17:29.311 "name": "Nvme$subsystem", 00:17:29.311 "trtype": "$TEST_TRANSPORT", 00:17:29.311 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:29.311 "adrfam": "ipv4", 00:17:29.311 "trsvcid": "$NVMF_PORT", 00:17:29.311 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:29.311 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:17:29.311 "hdgst": ${hdgst:-false}, 00:17:29.311 "ddgst": ${ddgst:-false} 00:17:29.311 }, 00:17:29.311 "method": "bdev_nvme_attach_controller" 00:17:29.311 } 00:17:29.311 EOF 00:17:29.311 )") 00:17:29.311 21:22:44 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:17:29.311 21:22:44 -- nvmf/common.sh@521 -- # config=() 00:17:29.311 21:22:44 -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:17:29.311 21:22:44 -- nvmf/common.sh@521 -- # local subsystem config 00:17:29.311 21:22:44 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:17:29.311 21:22:44 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:29.311 21:22:44 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:29.311 { 00:17:29.311 "params": { 00:17:29.311 "name": "Nvme$subsystem", 00:17:29.311 "trtype": "$TEST_TRANSPORT", 00:17:29.311 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:29.311 "adrfam": "ipv4", 00:17:29.311 "trsvcid": "$NVMF_PORT", 00:17:29.311 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:29.311 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:29.311 "hdgst": ${hdgst:-false}, 00:17:29.311 "ddgst": ${ddgst:-false} 00:17:29.311 }, 00:17:29.311 "method": "bdev_nvme_attach_controller" 00:17:29.311 } 00:17:29.311 EOF 00:17:29.311 )") 00:17:29.311 21:22:44 -- nvmf/common.sh@521 -- # config=() 00:17:29.311 21:22:44 -- nvmf/common.sh@521 -- # local subsystem config 00:17:29.311 21:22:44 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:29.311 21:22:44 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:29.311 { 00:17:29.311 "params": { 00:17:29.311 "name": "Nvme$subsystem", 00:17:29.311 "trtype": "$TEST_TRANSPORT", 00:17:29.311 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:29.311 "adrfam": "ipv4", 00:17:29.311 "trsvcid": "$NVMF_PORT", 00:17:29.311 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:29.311 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:29.311 "hdgst": ${hdgst:-false}, 00:17:29.311 "ddgst": ${ddgst:-false} 00:17:29.311 }, 00:17:29.311 "method": "bdev_nvme_attach_controller" 00:17:29.311 } 00:17:29.311 EOF 00:17:29.311 )") 00:17:29.311 21:22:44 -- nvmf/common.sh@543 -- # cat 00:17:29.311 21:22:44 -- nvmf/common.sh@543 -- # cat 00:17:29.311 21:22:44 -- target/bdev_io_wait.sh@37 -- # wait 1199879 00:17:29.311 21:22:44 -- nvmf/common.sh@543 -- # cat 00:17:29.311 21:22:44 -- nvmf/common.sh@543 -- # cat 00:17:29.311 21:22:44 -- nvmf/common.sh@545 -- # jq . 00:17:29.311 21:22:44 -- nvmf/common.sh@545 -- # jq . 00:17:29.311 21:22:44 -- nvmf/common.sh@545 -- # jq . 00:17:29.311 21:22:44 -- nvmf/common.sh@545 -- # jq . 
00:17:29.312 21:22:44 -- nvmf/common.sh@546 -- # IFS=, 00:17:29.312 21:22:44 -- nvmf/common.sh@546 -- # IFS=, 00:17:29.312 21:22:44 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:17:29.312 "params": { 00:17:29.312 "name": "Nvme1", 00:17:29.312 "trtype": "tcp", 00:17:29.312 "traddr": "10.0.0.2", 00:17:29.312 "adrfam": "ipv4", 00:17:29.312 "trsvcid": "4420", 00:17:29.312 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:29.312 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:29.312 "hdgst": false, 00:17:29.312 "ddgst": false 00:17:29.312 }, 00:17:29.312 "method": "bdev_nvme_attach_controller" 00:17:29.312 }' 00:17:29.312 21:22:44 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:17:29.312 "params": { 00:17:29.312 "name": "Nvme1", 00:17:29.312 "trtype": "tcp", 00:17:29.312 "traddr": "10.0.0.2", 00:17:29.312 "adrfam": "ipv4", 00:17:29.312 "trsvcid": "4420", 00:17:29.312 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:29.312 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:29.312 "hdgst": false, 00:17:29.312 "ddgst": false 00:17:29.312 }, 00:17:29.312 "method": "bdev_nvme_attach_controller" 00:17:29.312 }' 00:17:29.312 21:22:44 -- nvmf/common.sh@546 -- # IFS=, 00:17:29.312 21:22:44 -- nvmf/common.sh@546 -- # IFS=, 00:17:29.312 21:22:44 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:17:29.312 "params": { 00:17:29.312 "name": "Nvme1", 00:17:29.312 "trtype": "tcp", 00:17:29.312 "traddr": "10.0.0.2", 00:17:29.312 "adrfam": "ipv4", 00:17:29.312 "trsvcid": "4420", 00:17:29.312 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:29.312 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:29.312 "hdgst": false, 00:17:29.312 "ddgst": false 00:17:29.312 }, 00:17:29.312 "method": "bdev_nvme_attach_controller" 00:17:29.312 }' 00:17:29.312 21:22:44 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:17:29.312 "params": { 00:17:29.312 "name": "Nvme1", 00:17:29.312 "trtype": "tcp", 00:17:29.312 "traddr": "10.0.0.2", 00:17:29.312 "adrfam": "ipv4", 00:17:29.312 "trsvcid": "4420", 00:17:29.312 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:29.312 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:29.312 "hdgst": false, 00:17:29.312 "ddgst": false 00:17:29.312 }, 00:17:29.312 "method": "bdev_nvme_attach_controller" 00:17:29.312 }' 00:17:29.312 [2024-04-24 21:22:44.118021] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization... 00:17:29.312 [2024-04-24 21:22:44.118106] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:17:29.312 [2024-04-24 21:22:44.137236] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization... 00:17:29.312 [2024-04-24 21:22:44.137440] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:17:29.312 [2024-04-24 21:22:44.147591] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization... 00:17:29.312 [2024-04-24 21:22:44.147737] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:17:29.312 [2024-04-24 21:22:44.149150] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization... 
00:17:29.312 [2024-04-24 21:22:44.149306] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:17:29.312 EAL: No free 2048 kB hugepages reported on node 1 00:17:29.312 EAL: No free 2048 kB hugepages reported on node 1 00:17:29.573 EAL: No free 2048 kB hugepages reported on node 1 00:17:29.573 [2024-04-24 21:22:44.312864] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:29.573 [2024-04-24 21:22:44.353584] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:29.573 EAL: No free 2048 kB hugepages reported on node 1 00:17:29.573 [2024-04-24 21:22:44.445182] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:29.573 [2024-04-24 21:22:44.447534] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:17:29.573 [2024-04-24 21:22:44.488045] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:17:29.832 [2024-04-24 21:22:44.550260] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:29.832 [2024-04-24 21:22:44.579633] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:17:29.832 [2024-04-24 21:22:44.688368] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:17:29.832 Running I/O for 1 seconds... 00:17:29.832 Running I/O for 1 seconds... 00:17:30.092 Running I/O for 1 seconds... 00:17:30.092 Running I/O for 1 seconds... 00:17:31.035 00:17:31.035 Latency(us) 00:17:31.035 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:31.035 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:17:31.035 Nvme1n1 : 1.00 136426.06 532.91 0.00 0.00 934.33 364.33 1129.63 00:17:31.035 =================================================================================================================== 00:17:31.035 Total : 136426.06 532.91 0.00 0.00 934.33 364.33 1129.63 00:17:31.035 00:17:31.035 Latency(us) 00:17:31.035 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:31.035 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:17:31.035 Nvme1n1 : 1.00 13762.94 53.76 0.00 0.00 9270.69 5001.43 14210.96 00:17:31.035 =================================================================================================================== 00:17:31.035 Total : 13762.94 53.76 0.00 0.00 9270.69 5001.43 14210.96 00:17:31.035 00:17:31.035 Latency(us) 00:17:31.035 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:31.035 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:17:31.035 Nvme1n1 : 1.01 10957.01 42.80 0.00 0.00 11640.23 5587.81 24420.78 00:17:31.035 =================================================================================================================== 00:17:31.035 Total : 10957.01 42.80 0.00 0.00 11640.23 5587.81 24420.78 00:17:31.293 00:17:31.293 Latency(us) 00:17:31.293 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:31.293 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:17:31.293 Nvme1n1 : 1.01 11158.42 43.59 0.00 0.00 11434.96 5173.89 21247.46 00:17:31.293 =================================================================================================================== 00:17:31.293 Total : 11158.42 43.59 0.00 0.00 11434.96 5173.89 21247.46 00:17:31.552 21:22:46 -- target/bdev_io_wait.sh@38 -- # wait 1199880 00:17:31.552 
21:22:46 -- target/bdev_io_wait.sh@39 -- # wait 1199882 00:17:31.552 21:22:46 -- target/bdev_io_wait.sh@40 -- # wait 1199884 00:17:31.552 21:22:46 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:31.552 21:22:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:31.552 21:22:46 -- common/autotest_common.sh@10 -- # set +x 00:17:31.552 21:22:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:31.552 21:22:46 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:17:31.552 21:22:46 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:17:31.552 21:22:46 -- nvmf/common.sh@477 -- # nvmfcleanup 00:17:31.552 21:22:46 -- nvmf/common.sh@117 -- # sync 00:17:31.552 21:22:46 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:31.552 21:22:46 -- nvmf/common.sh@120 -- # set +e 00:17:31.552 21:22:46 -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:31.552 21:22:46 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:31.552 rmmod nvme_tcp 00:17:31.812 rmmod nvme_fabrics 00:17:31.812 rmmod nvme_keyring 00:17:31.812 21:22:46 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:31.812 21:22:46 -- nvmf/common.sh@124 -- # set -e 00:17:31.812 21:22:46 -- nvmf/common.sh@125 -- # return 0 00:17:31.812 21:22:46 -- nvmf/common.sh@478 -- # '[' -n 1199600 ']' 00:17:31.812 21:22:46 -- nvmf/common.sh@479 -- # killprocess 1199600 00:17:31.812 21:22:46 -- common/autotest_common.sh@936 -- # '[' -z 1199600 ']' 00:17:31.812 21:22:46 -- common/autotest_common.sh@940 -- # kill -0 1199600 00:17:31.812 21:22:46 -- common/autotest_common.sh@941 -- # uname 00:17:31.812 21:22:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:31.812 21:22:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1199600 00:17:31.812 21:22:46 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:31.812 21:22:46 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:31.812 21:22:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1199600' 00:17:31.812 killing process with pid 1199600 00:17:31.812 21:22:46 -- common/autotest_common.sh@955 -- # kill 1199600 00:17:31.812 21:22:46 -- common/autotest_common.sh@960 -- # wait 1199600 00:17:32.072 21:22:47 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:17:32.072 21:22:47 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:17:32.072 21:22:47 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:17:32.072 21:22:47 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:32.072 21:22:47 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:32.072 21:22:47 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:32.072 21:22:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:32.072 21:22:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:34.610 21:22:49 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:34.610 00:17:34.610 real 0m11.928s 00:17:34.610 user 0m22.678s 00:17:34.610 sys 0m6.361s 00:17:34.610 21:22:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:34.610 21:22:49 -- common/autotest_common.sh@10 -- # set +x 00:17:34.610 ************************************ 00:17:34.610 END TEST nvmf_bdev_io_wait 00:17:34.610 ************************************ 00:17:34.610 21:22:49 -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:17:34.610 21:22:49 -- common/autotest_common.sh@1087 -- # 
'[' 3 -le 1 ']' 00:17:34.610 21:22:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:34.610 21:22:49 -- common/autotest_common.sh@10 -- # set +x 00:17:34.610 ************************************ 00:17:34.610 START TEST nvmf_queue_depth 00:17:34.610 ************************************ 00:17:34.610 21:22:49 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:17:34.610 * Looking for test storage... 00:17:34.610 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:17:34.610 21:22:49 -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:17:34.610 21:22:49 -- nvmf/common.sh@7 -- # uname -s 00:17:34.610 21:22:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:34.610 21:22:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:34.610 21:22:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:34.610 21:22:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:34.610 21:22:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:34.610 21:22:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:34.610 21:22:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:34.610 21:22:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:34.610 21:22:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:34.610 21:22:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:34.610 21:22:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b7babf-2e5c-ee11-906e-a4bf01970bf2 00:17:34.610 21:22:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=80b7babf-2e5c-ee11-906e-a4bf01970bf2 00:17:34.610 21:22:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:34.610 21:22:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:34.610 21:22:49 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:17:34.610 21:22:49 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:34.610 21:22:49 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:17:34.610 21:22:49 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:34.610 21:22:49 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:34.610 21:22:49 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:34.610 21:22:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:34.610 21:22:49 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:34.610 21:22:49 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:34.610 21:22:49 -- paths/export.sh@5 -- # export PATH 00:17:34.610 21:22:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:34.610 21:22:49 -- nvmf/common.sh@47 -- # : 0 00:17:34.610 21:22:49 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:34.610 21:22:49 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:34.610 21:22:49 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:34.610 21:22:49 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:34.610 21:22:49 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:34.610 21:22:49 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:34.610 21:22:49 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:34.610 21:22:49 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:34.610 21:22:49 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:17:34.610 21:22:49 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:17:34.610 21:22:49 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:34.610 21:22:49 -- target/queue_depth.sh@19 -- # nvmftestinit 00:17:34.611 21:22:49 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:17:34.611 21:22:49 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:34.611 21:22:49 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:34.611 21:22:49 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:34.611 21:22:49 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:34.611 21:22:49 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:34.611 21:22:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:34.611 21:22:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:34.611 21:22:49 -- nvmf/common.sh@403 -- # [[ phy-fallback != virt ]] 00:17:34.611 21:22:49 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:17:34.611 21:22:49 -- nvmf/common.sh@285 -- # xtrace_disable 00:17:34.611 21:22:49 -- 
common/autotest_common.sh@10 -- # set +x 00:17:41.193 21:22:55 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:41.193 21:22:55 -- nvmf/common.sh@291 -- # pci_devs=() 00:17:41.193 21:22:55 -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:41.193 21:22:55 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:41.193 21:22:55 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:41.193 21:22:55 -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:41.193 21:22:55 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:41.193 21:22:55 -- nvmf/common.sh@295 -- # net_devs=() 00:17:41.193 21:22:55 -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:41.193 21:22:55 -- nvmf/common.sh@296 -- # e810=() 00:17:41.193 21:22:55 -- nvmf/common.sh@296 -- # local -ga e810 00:17:41.193 21:22:55 -- nvmf/common.sh@297 -- # x722=() 00:17:41.193 21:22:55 -- nvmf/common.sh@297 -- # local -ga x722 00:17:41.193 21:22:55 -- nvmf/common.sh@298 -- # mlx=() 00:17:41.193 21:22:55 -- nvmf/common.sh@298 -- # local -ga mlx 00:17:41.193 21:22:55 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:41.193 21:22:55 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:41.193 21:22:55 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:41.193 21:22:55 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:41.193 21:22:55 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:41.193 21:22:55 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:41.193 21:22:55 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:41.193 21:22:55 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:41.193 21:22:55 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:41.193 21:22:55 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:41.193 21:22:55 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:41.193 21:22:55 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:41.193 21:22:55 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:41.193 21:22:55 -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:17:41.193 21:22:55 -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:17:41.193 21:22:55 -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:17:41.193 21:22:55 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:41.193 21:22:55 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:41.193 21:22:55 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:17:41.193 Found 0000:27:00.0 (0x8086 - 0x159b) 00:17:41.193 21:22:55 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:41.193 21:22:55 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:41.193 21:22:55 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:41.193 21:22:55 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:41.194 21:22:55 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:41.194 21:22:55 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:41.194 21:22:55 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:17:41.194 Found 0000:27:00.1 (0x8086 - 0x159b) 00:17:41.194 21:22:55 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:41.194 21:22:55 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:41.194 21:22:55 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:41.194 21:22:55 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:41.194 
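Note: the scan above is nvmf/common.sh matching NICs by PCI IDs; vendor 0x8086 with device 0x159b is an Intel E810 port bound to the ice driver, and the usable interface name is read from the device's net/ directory in sysfs (the same glob the script uses). An equivalent manual check for the first port, using the bus address printed above:

  cat /sys/bus/pci/devices/0000:27:00.0/vendor   # expect 0x8086 (Intel)
  cat /sys/bus/pci/devices/0000:27:00.0/device   # expect 0x159b (E810)
  ls /sys/bus/pci/devices/0000:27:00.0/net/      # expect cvl_0_0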
21:22:55 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:41.194 21:22:55 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:41.194 21:22:55 -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:17:41.194 21:22:55 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:41.194 21:22:55 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:41.194 21:22:55 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:41.194 21:22:55 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:41.194 21:22:55 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:17:41.194 Found net devices under 0000:27:00.0: cvl_0_0 00:17:41.194 21:22:55 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:41.194 21:22:55 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:41.194 21:22:55 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:41.194 21:22:55 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:41.194 21:22:55 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:41.194 21:22:55 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:17:41.194 Found net devices under 0000:27:00.1: cvl_0_1 00:17:41.194 21:22:55 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:41.194 21:22:55 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:17:41.194 21:22:55 -- nvmf/common.sh@403 -- # is_hw=yes 00:17:41.194 21:22:55 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:17:41.194 21:22:55 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:17:41.194 21:22:55 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:17:41.194 21:22:55 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:41.194 21:22:55 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:41.194 21:22:55 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:41.194 21:22:55 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:41.194 21:22:55 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:41.194 21:22:55 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:41.194 21:22:55 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:41.194 21:22:55 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:41.194 21:22:55 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:41.194 21:22:55 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:41.194 21:22:55 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:41.194 21:22:55 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:41.194 21:22:55 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:41.194 21:22:55 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:41.194 21:22:55 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:41.194 21:22:55 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:41.194 21:22:55 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:41.194 21:22:55 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:41.194 21:22:55 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:41.194 21:22:55 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:41.194 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:41.194 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.307 ms 00:17:41.194 00:17:41.194 --- 10.0.0.2 ping statistics --- 00:17:41.194 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:41.194 rtt min/avg/max/mdev = 0.307/0.307/0.307/0.000 ms 00:17:41.194 21:22:55 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:41.194 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:41.194 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.051 ms 00:17:41.194 00:17:41.194 --- 10.0.0.1 ping statistics --- 00:17:41.194 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:41.194 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:17:41.194 21:22:55 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:41.194 21:22:55 -- nvmf/common.sh@411 -- # return 0 00:17:41.194 21:22:55 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:41.194 21:22:55 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:41.194 21:22:55 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:17:41.194 21:22:55 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:17:41.194 21:22:55 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:41.194 21:22:55 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:17:41.194 21:22:55 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:17:41.194 21:22:55 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:17:41.194 21:22:55 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:41.194 21:22:55 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:41.194 21:22:55 -- common/autotest_common.sh@10 -- # set +x 00:17:41.194 21:22:55 -- nvmf/common.sh@470 -- # nvmfpid=1204520 00:17:41.194 21:22:55 -- nvmf/common.sh@471 -- # waitforlisten 1204520 00:17:41.194 21:22:55 -- common/autotest_common.sh@817 -- # '[' -z 1204520 ']' 00:17:41.194 21:22:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:41.194 21:22:55 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:41.194 21:22:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:41.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:41.194 21:22:55 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:41.194 21:22:55 -- common/autotest_common.sh@10 -- # set +x 00:17:41.194 21:22:55 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:41.194 [2024-04-24 21:22:55.485650] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization... 00:17:41.194 [2024-04-24 21:22:55.485755] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:41.194 EAL: No free 2048 kB hugepages reported on node 1 00:17:41.194 [2024-04-24 21:22:55.610907] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:41.194 [2024-04-24 21:22:55.708704] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:41.194 [2024-04-24 21:22:55.708742] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:41.194 [2024-04-24 21:22:55.708752] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:41.194 [2024-04-24 21:22:55.708762] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:41.194 [2024-04-24 21:22:55.708771] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:41.194 [2024-04-24 21:22:55.708798] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:41.455 21:22:56 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:41.455 21:22:56 -- common/autotest_common.sh@850 -- # return 0 00:17:41.455 21:22:56 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:41.455 21:22:56 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:41.455 21:22:56 -- common/autotest_common.sh@10 -- # set +x 00:17:41.455 21:22:56 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:41.455 21:22:56 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:41.455 21:22:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:41.455 21:22:56 -- common/autotest_common.sh@10 -- # set +x 00:17:41.455 [2024-04-24 21:22:56.243608] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:41.455 21:22:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:41.455 21:22:56 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:41.455 21:22:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:41.455 21:22:56 -- common/autotest_common.sh@10 -- # set +x 00:17:41.455 Malloc0 00:17:41.455 21:22:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:41.455 21:22:56 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:41.455 21:22:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:41.455 21:22:56 -- common/autotest_common.sh@10 -- # set +x 00:17:41.455 21:22:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:41.455 21:22:56 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:41.455 21:22:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:41.455 21:22:56 -- common/autotest_common.sh@10 -- # set +x 00:17:41.455 21:22:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:41.455 21:22:56 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:41.455 21:22:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:41.455 21:22:56 -- common/autotest_common.sh@10 -- # set +x 00:17:41.455 [2024-04-24 21:22:56.316813] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:41.455 21:22:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:41.455 21:22:56 -- target/queue_depth.sh@30 -- # bdevperf_pid=1204719 00:17:41.455 21:22:56 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:41.455 21:22:56 -- target/queue_depth.sh@33 -- # waitforlisten 1204719 /var/tmp/bdevperf.sock 00:17:41.455 21:22:56 -- common/autotest_common.sh@817 -- # '[' -z 1204719 ']' 00:17:41.455 21:22:56 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:41.455 21:22:56 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:41.455 
21:22:56 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:41.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:41.455 21:22:56 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:41.455 21:22:56 -- common/autotest_common.sh@10 -- # set +x 00:17:41.455 21:22:56 -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:17:41.455 [2024-04-24 21:22:56.407128] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization... 00:17:41.455 [2024-04-24 21:22:56.407282] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1204719 ] 00:17:41.716 EAL: No free 2048 kB hugepages reported on node 1 00:17:41.716 [2024-04-24 21:22:56.540630] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:41.716 [2024-04-24 21:22:56.636975] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:42.289 21:22:57 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:42.289 21:22:57 -- common/autotest_common.sh@850 -- # return 0 00:17:42.289 21:22:57 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:42.289 21:22:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:42.289 21:22:57 -- common/autotest_common.sh@10 -- # set +x 00:17:42.289 NVMe0n1 00:17:42.289 21:22:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:42.289 21:22:57 -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:42.550 Running I/O for 10 seconds... 
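Note: stripped of the xtrace plumbing, the setup queue_depth.sh just performed reduces to a short RPC sequence (rpc_cmd in the trace is a thin wrapper around scripts/rpc.py, and the target app itself runs under ip netns exec cvl_0_0_ns_spdk as launched earlier): configure the target, then attach the already-started bdevperf (-z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10) to it. All values below are as traced above:

  # target side (default /var/tmp/spdk.sock)
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # initiator side, against the bdevperf RPC socket
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1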
00:17:52.596
00:17:52.596 Latency(us)
00:17:52.596 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:52.596 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096)
00:17:52.596 Verification LBA range: start 0x0 length 0x4000
00:17:52.596 NVMe0n1 : 10.05 12139.12 47.42 0.00 0.00 84053.81 12969.23 58775.44
00:17:52.596 ===================================================================================================================
00:17:52.596 Total : 12139.12 47.42 0.00 0.00 84053.81 12969.23 58775.44
00:17:52.596 0
00:17:52.596 21:23:07 -- target/queue_depth.sh@39 -- # killprocess 1204719
00:17:52.596 21:23:07 -- common/autotest_common.sh@936 -- # '[' -z 1204719 ']'
00:17:52.596 21:23:07 -- common/autotest_common.sh@940 -- # kill -0 1204719
00:17:52.596 21:23:07 -- common/autotest_common.sh@941 -- # uname
00:17:52.596 21:23:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:17:52.596 21:23:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1204719
00:17:52.596 21:23:07 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:17:52.596 21:23:07 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:17:52.596 21:23:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1204719'
killing process with pid 1204719
00:17:52.596 21:23:07 -- common/autotest_common.sh@955 -- # kill 1204719
Received shutdown signal, test time was about 10.000000 seconds
00:17:52.596
00:17:52.596 Latency(us)
00:17:52.596 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:52.596 ===================================================================================================================
00:17:52.596 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:17:52.596 21:23:07 -- common/autotest_common.sh@960 -- # wait 1204719
00:17:52.855 21:23:07 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT
00:17:52.855 21:23:07 -- target/queue_depth.sh@43 -- # nvmftestfini
00:17:52.855 21:23:07 -- nvmf/common.sh@477 -- # nvmfcleanup
00:17:52.855 21:23:07 -- nvmf/common.sh@117 -- # sync
00:17:52.855 21:23:07 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:17:52.855 21:23:07 -- nvmf/common.sh@120 -- # set +e
00:17:52.855 21:23:07 -- nvmf/common.sh@121 -- # for i in {1..20}
00:17:52.855 21:23:07 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:17:53.113 21:23:07 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:17:53.113 21:23:07 -- nvmf/common.sh@124 -- # set -e
00:17:53.113 21:23:07 -- nvmf/common.sh@125 -- # return 0
00:17:53.113 21:23:07 -- nvmf/common.sh@478 -- # '[' -n 1204520 ']'
00:17:53.113 21:23:07 -- nvmf/common.sh@479 -- # killprocess 1204520
00:17:53.113 21:23:07 -- common/autotest_common.sh@936 -- # '[' -z 1204520 ']'
00:17:53.113 21:23:07 -- common/autotest_common.sh@940 -- # kill -0 1204520
00:17:53.113 21:23:07 -- common/autotest_common.sh@941 -- # uname
00:17:53.113 21:23:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:17:53.113 21:23:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1204520
00:17:53.113 21:23:07 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:17:53.113 21:23:07 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:17:53.113 21:23:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1204520'
killing process with pid 1204520
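Note: the verify result is self-consistent under Little's Law: with 1024 I/Os in flight at 12139.12 IOPS, the expected average latency is 1024 / 12139.12 = 0.0844 s, about 84,400 us, which matches the reported 84053.81 us once ramp-up and drain over the 10.05 s runtime are allowed for. The throughput likewise works out: 12139.12 * 4096 B = ~47.4 MiB/s, the value in the MiB/s column. The second, all-zero table appears to be the summary bdevperf prints at shutdown, after the job has already stopped.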
21:23:07 -- common/autotest_common.sh@955 -- # kill 1204520 00:17:53.113 21:23:07 -- common/autotest_common.sh@960 -- # wait 1204520 00:17:53.680 21:23:08 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:17:53.680 21:23:08 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:17:53.680 21:23:08 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:17:53.680 21:23:08 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:53.680 21:23:08 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:53.680 21:23:08 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:53.680 21:23:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:53.680 21:23:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:55.585 21:23:10 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:55.585 00:17:55.585 real 0m21.247s 00:17:55.585 user 0m25.527s 00:17:55.585 sys 0m5.786s 00:17:55.585 21:23:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:55.585 21:23:10 -- common/autotest_common.sh@10 -- # set +x 00:17:55.585 ************************************ 00:17:55.585 END TEST nvmf_queue_depth 00:17:55.585 ************************************ 00:17:55.585 21:23:10 -- nvmf/nvmf.sh@52 -- # run_test nvmf_multipath /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:17:55.585 21:23:10 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:55.585 21:23:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:55.585 21:23:10 -- common/autotest_common.sh@10 -- # set +x 00:17:55.845 ************************************ 00:17:55.845 START TEST nvmf_multipath 00:17:55.845 ************************************ 00:17:55.845 21:23:10 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:17:55.845 * Looking for test storage... 
00:17:55.845 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:17:55.845 21:23:10 -- target/multipath.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:17:55.845 21:23:10 -- nvmf/common.sh@7 -- # uname -s 00:17:55.845 21:23:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:55.845 21:23:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:55.845 21:23:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:55.845 21:23:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:55.845 21:23:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:55.845 21:23:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:55.845 21:23:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:55.845 21:23:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:55.845 21:23:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:55.845 21:23:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:55.845 21:23:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b7babf-2e5c-ee11-906e-a4bf01970bf2 00:17:55.845 21:23:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=80b7babf-2e5c-ee11-906e-a4bf01970bf2 00:17:55.845 21:23:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:55.845 21:23:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:55.845 21:23:10 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:17:55.845 21:23:10 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:55.845 21:23:10 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:17:55.845 21:23:10 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:55.845 21:23:10 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:55.845 21:23:10 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:55.845 21:23:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.845 21:23:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.845 21:23:10 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.845 21:23:10 -- paths/export.sh@5 -- # export PATH 00:17:55.845 21:23:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.845 21:23:10 -- nvmf/common.sh@47 -- # : 0 00:17:55.845 21:23:10 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:55.845 21:23:10 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:55.845 21:23:10 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:55.845 21:23:10 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:55.845 21:23:10 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:55.845 21:23:10 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:55.845 21:23:10 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:55.845 21:23:10 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:55.845 21:23:10 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:55.845 21:23:10 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:55.845 21:23:10 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:17:55.845 21:23:10 -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:17:55.845 21:23:10 -- target/multipath.sh@43 -- # nvmftestinit 00:17:55.846 21:23:10 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:17:55.846 21:23:10 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:55.846 21:23:10 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:55.846 21:23:10 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:55.846 21:23:10 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:55.846 21:23:10 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:55.846 21:23:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:55.846 21:23:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:55.846 21:23:10 -- nvmf/common.sh@403 -- # [[ phy-fallback != virt ]] 00:17:55.846 21:23:10 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:17:55.846 21:23:10 -- nvmf/common.sh@285 -- # xtrace_disable 00:17:55.846 21:23:10 -- common/autotest_common.sh@10 -- # set +x 00:18:01.148 21:23:15 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:01.148 21:23:15 -- nvmf/common.sh@291 -- # pci_devs=() 00:18:01.148 21:23:15 -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:01.148 21:23:15 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:01.148 21:23:15 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:01.148 21:23:15 -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:01.148 21:23:15 
-- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:01.148 21:23:15 -- nvmf/common.sh@295 -- # net_devs=() 00:18:01.148 21:23:15 -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:01.148 21:23:15 -- nvmf/common.sh@296 -- # e810=() 00:18:01.148 21:23:15 -- nvmf/common.sh@296 -- # local -ga e810 00:18:01.148 21:23:15 -- nvmf/common.sh@297 -- # x722=() 00:18:01.148 21:23:15 -- nvmf/common.sh@297 -- # local -ga x722 00:18:01.148 21:23:15 -- nvmf/common.sh@298 -- # mlx=() 00:18:01.148 21:23:15 -- nvmf/common.sh@298 -- # local -ga mlx 00:18:01.148 21:23:15 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:01.148 21:23:15 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:01.148 21:23:15 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:01.148 21:23:15 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:01.148 21:23:15 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:01.148 21:23:15 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:01.148 21:23:15 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:01.148 21:23:15 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:01.148 21:23:15 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:01.148 21:23:15 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:01.148 21:23:15 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:01.148 21:23:15 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:01.148 21:23:15 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:01.148 21:23:15 -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:18:01.148 21:23:15 -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:18:01.148 21:23:15 -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:18:01.148 21:23:15 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:01.148 21:23:15 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:01.148 21:23:15 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:18:01.148 Found 0000:27:00.0 (0x8086 - 0x159b) 00:18:01.148 21:23:15 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:01.148 21:23:15 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:01.148 21:23:15 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:01.148 21:23:15 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:01.148 21:23:15 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:01.148 21:23:15 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:01.148 21:23:15 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:18:01.148 Found 0000:27:00.1 (0x8086 - 0x159b) 00:18:01.148 21:23:15 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:01.148 21:23:15 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:01.148 21:23:15 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:01.148 21:23:15 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:01.148 21:23:15 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:01.148 21:23:15 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:01.148 21:23:15 -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:18:01.148 21:23:15 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:01.148 21:23:15 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:01.148 21:23:15 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:01.148 21:23:15 -- nvmf/common.sh@388 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:01.148 21:23:15 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:18:01.148 Found net devices under 0000:27:00.0: cvl_0_0 00:18:01.148 21:23:15 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:01.148 21:23:15 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:01.148 21:23:15 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:01.148 21:23:15 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:01.149 21:23:15 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:01.149 21:23:15 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:18:01.149 Found net devices under 0000:27:00.1: cvl_0_1 00:18:01.149 21:23:15 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:01.149 21:23:15 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:18:01.149 21:23:15 -- nvmf/common.sh@403 -- # is_hw=yes 00:18:01.149 21:23:15 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:18:01.149 21:23:15 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:18:01.149 21:23:15 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:18:01.149 21:23:15 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:01.149 21:23:15 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:01.149 21:23:15 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:01.149 21:23:15 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:01.149 21:23:15 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:01.149 21:23:15 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:01.149 21:23:15 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:01.149 21:23:15 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:01.149 21:23:15 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:01.149 21:23:15 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:01.149 21:23:15 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:01.149 21:23:15 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:01.149 21:23:15 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:01.149 21:23:15 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:01.149 21:23:16 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:01.149 21:23:16 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:01.149 21:23:16 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:01.149 21:23:16 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:01.149 21:23:16 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:01.410 21:23:16 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:01.410 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:01.410 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.354 ms 00:18:01.410 00:18:01.410 --- 10.0.0.2 ping statistics --- 00:18:01.410 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:01.410 rtt min/avg/max/mdev = 0.354/0.354/0.354/0.000 ms 00:18:01.410 21:23:16 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:01.410 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:01.410 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.389 ms 00:18:01.410 00:18:01.410 --- 10.0.0.1 ping statistics --- 00:18:01.410 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:01.410 rtt min/avg/max/mdev = 0.389/0.389/0.389/0.000 ms 00:18:01.410 21:23:16 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:01.410 21:23:16 -- nvmf/common.sh@411 -- # return 0 00:18:01.410 21:23:16 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:18:01.410 21:23:16 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:01.410 21:23:16 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:18:01.410 21:23:16 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:18:01.410 21:23:16 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:01.410 21:23:16 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:18:01.410 21:23:16 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:18:01.410 21:23:16 -- target/multipath.sh@45 -- # '[' -z ']' 00:18:01.410 21:23:16 -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:18:01.410 only one NIC for nvmf test 00:18:01.410 21:23:16 -- target/multipath.sh@47 -- # nvmftestfini 00:18:01.410 21:23:16 -- nvmf/common.sh@477 -- # nvmfcleanup 00:18:01.410 21:23:16 -- nvmf/common.sh@117 -- # sync 00:18:01.410 21:23:16 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:01.410 21:23:16 -- nvmf/common.sh@120 -- # set +e 00:18:01.410 21:23:16 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:01.410 21:23:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:01.410 rmmod nvme_tcp 00:18:01.410 rmmod nvme_fabrics 00:18:01.410 rmmod nvme_keyring 00:18:01.410 21:23:16 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:01.410 21:23:16 -- nvmf/common.sh@124 -- # set -e 00:18:01.410 21:23:16 -- nvmf/common.sh@125 -- # return 0 00:18:01.410 21:23:16 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:18:01.410 21:23:16 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:18:01.410 21:23:16 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:18:01.410 21:23:16 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:18:01.410 21:23:16 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:01.410 21:23:16 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:01.410 21:23:16 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:01.410 21:23:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:01.410 21:23:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:03.946 21:23:18 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:03.946 21:23:18 -- target/multipath.sh@48 -- # exit 0 00:18:03.946 21:23:18 -- target/multipath.sh@1 -- # nvmftestfini 00:18:03.946 21:23:18 -- nvmf/common.sh@477 -- # nvmfcleanup 00:18:03.946 21:23:18 -- nvmf/common.sh@117 -- # sync 00:18:03.946 21:23:18 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:03.946 21:23:18 -- nvmf/common.sh@120 -- # set +e 00:18:03.946 21:23:18 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:03.946 21:23:18 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:03.946 21:23:18 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:03.946 21:23:18 -- nvmf/common.sh@124 -- # set -e 00:18:03.946 21:23:18 -- nvmf/common.sh@125 -- # return 0 00:18:03.946 21:23:18 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:18:03.946 21:23:18 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:18:03.946 21:23:18 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:18:03.946 21:23:18 -- nvmf/common.sh@485 -- # 
nvmf_tcp_fini 00:18:03.946 21:23:18 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:03.946 21:23:18 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:03.946 21:23:18 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:03.946 21:23:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:03.946 21:23:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:03.946 21:23:18 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:03.946 00:18:03.946 real 0m7.735s 00:18:03.946 user 0m1.616s 00:18:03.946 sys 0m4.026s 00:18:03.946 21:23:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:03.946 21:23:18 -- common/autotest_common.sh@10 -- # set +x 00:18:03.946 ************************************ 00:18:03.946 END TEST nvmf_multipath 00:18:03.946 ************************************ 00:18:03.946 21:23:18 -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:18:03.946 21:23:18 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:03.946 21:23:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:03.946 21:23:18 -- common/autotest_common.sh@10 -- # set +x 00:18:03.946 ************************************ 00:18:03.946 START TEST nvmf_zcopy 00:18:03.946 ************************************ 00:18:03.946 21:23:18 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:18:03.946 * Looking for test storage... 00:18:03.946 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:18:03.946 21:23:18 -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:18:03.946 21:23:18 -- nvmf/common.sh@7 -- # uname -s 00:18:03.946 21:23:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:03.946 21:23:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:03.946 21:23:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:03.946 21:23:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:03.946 21:23:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:03.946 21:23:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:03.946 21:23:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:03.946 21:23:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:03.946 21:23:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:03.946 21:23:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:03.946 21:23:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b7babf-2e5c-ee11-906e-a4bf01970bf2 00:18:03.946 21:23:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=80b7babf-2e5c-ee11-906e-a4bf01970bf2 00:18:03.946 21:23:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:03.946 21:23:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:03.946 21:23:18 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:18:03.946 21:23:18 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:03.946 21:23:18 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:18:03.946 21:23:18 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:03.946 21:23:18 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:03.946 21:23:18 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:03.946 21:23:18 -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:03.946 21:23:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:03.946 21:23:18 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:03.946 21:23:18 -- paths/export.sh@5 -- # export PATH 00:18:03.946 21:23:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:03.946 21:23:18 -- nvmf/common.sh@47 -- # : 0 00:18:03.946 21:23:18 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:03.946 21:23:18 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:03.946 21:23:18 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:03.946 21:23:18 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:03.946 21:23:18 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:03.946 21:23:18 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:03.946 21:23:18 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:03.946 21:23:18 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:03.946 21:23:18 -- target/zcopy.sh@12 -- # nvmftestinit 00:18:03.946 21:23:18 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:18:03.946 21:23:18 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:03.946 21:23:18 -- nvmf/common.sh@437 -- # prepare_net_devs 00:18:03.946 21:23:18 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:18:03.946 21:23:18 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:18:03.946 21:23:18 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:03.946 21:23:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:03.946 
21:23:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:03.946 21:23:18 -- nvmf/common.sh@403 -- # [[ phy-fallback != virt ]] 00:18:03.946 21:23:18 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:18:03.946 21:23:18 -- nvmf/common.sh@285 -- # xtrace_disable 00:18:03.946 21:23:18 -- common/autotest_common.sh@10 -- # set +x 00:18:09.218 21:23:23 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:09.218 21:23:23 -- nvmf/common.sh@291 -- # pci_devs=() 00:18:09.218 21:23:23 -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:09.218 21:23:23 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:09.218 21:23:23 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:09.218 21:23:23 -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:09.218 21:23:23 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:09.218 21:23:23 -- nvmf/common.sh@295 -- # net_devs=() 00:18:09.218 21:23:23 -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:09.218 21:23:23 -- nvmf/common.sh@296 -- # e810=() 00:18:09.218 21:23:23 -- nvmf/common.sh@296 -- # local -ga e810 00:18:09.218 21:23:23 -- nvmf/common.sh@297 -- # x722=() 00:18:09.218 21:23:23 -- nvmf/common.sh@297 -- # local -ga x722 00:18:09.218 21:23:23 -- nvmf/common.sh@298 -- # mlx=() 00:18:09.218 21:23:23 -- nvmf/common.sh@298 -- # local -ga mlx 00:18:09.218 21:23:23 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:09.218 21:23:23 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:09.218 21:23:23 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:09.218 21:23:23 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:09.218 21:23:23 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:09.218 21:23:23 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:09.218 21:23:23 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:09.218 21:23:23 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:09.218 21:23:23 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:09.218 21:23:23 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:09.218 21:23:23 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:09.218 21:23:23 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:09.218 21:23:23 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:09.218 21:23:23 -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:18:09.218 21:23:23 -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:18:09.218 21:23:23 -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:18:09.219 21:23:23 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:09.219 21:23:23 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:09.219 21:23:23 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:18:09.219 Found 0000:27:00.0 (0x8086 - 0x159b) 00:18:09.219 21:23:23 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:09.219 21:23:23 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:09.219 21:23:23 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:09.219 21:23:23 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:09.219 21:23:23 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:09.219 21:23:23 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:09.219 21:23:23 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:18:09.219 Found 0000:27:00.1 (0x8086 - 0x159b) 
00:18:09.219 21:23:23 -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:18:09.219 21:23:23 -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:18:09.219 21:23:23 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:18:09.219 21:23:23 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:18:09.219 21:23:23 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:18:09.219 21:23:23 -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:18:09.219 21:23:23 -- nvmf/common.sh@372 -- # [[ '' == e810 ]]
00:18:09.219 21:23:23 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:18:09.219 21:23:23 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:18:09.219 21:23:23 -- nvmf/common.sh@384 -- # (( 1 == 0 ))
00:18:09.219 21:23:23 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:18:09.219 21:23:23 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0'
00:18:09.219 Found net devices under 0000:27:00.0: cvl_0_0
00:18:09.219 21:23:23 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}")
00:18:09.219 21:23:23 -- nvmf/common.sh@382-@390 -- # [same @382..@390 steps for the second port]
00:18:09.219 Found net devices under 0000:27:00.1: cvl_0_1
00:18:09.219 21:23:23 -- nvmf/common.sh@393 -- # (( 2 == 0 ))
00:18:09.219 21:23:23 -- nvmf/common.sh@403 -- # is_hw=yes
00:18:09.219 21:23:23 -- nvmf/common.sh@405 -- # [[ yes == yes ]]
00:18:09.219 21:23:23 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]]
00:18:09.219 21:23:23 -- nvmf/common.sh@407 -- # nvmf_tcp_init
00:18:09.219 21:23:23 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:18:09.219 21:23:23 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:18:09.219 21:23:23 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:18:09.219 21:23:23 -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:18:09.219 21:23:23 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:18:09.219 21:23:23 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:18:09.219 21:23:23 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:18:09.219 21:23:23 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:18:09.219 21:23:23 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:18:09.219 21:23:23 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:18:09.219 21:23:23 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:18:09.219 21:23:23 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:18:09.219 21:23:23 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:18:09.219 21:23:23 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:18:09.219 21:23:23 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:18:09.219 21:23:23 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:18:09.219 21:23:23 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:18:09.219 21:23:24 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:18:09.219 21:23:24 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:18:09.219 21:23:24 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:18:09.219 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:18:09.219 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.519 ms
00:18:09.219
00:18:09.219 --- 10.0.0.2 ping statistics ---
00:18:09.219 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:18:09.219 rtt min/avg/max/mdev = 0.519/0.519/0.519/0.000 ms
00:18:09.219 21:23:24 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:18:09.219 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:18:09.219 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.247 ms
00:18:09.219
00:18:09.219 --- 10.0.0.1 ping statistics ---
00:18:09.219 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:18:09.219 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms
00:18:09.219 21:23:24 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:18:09.219 21:23:24 -- nvmf/common.sh@411 -- # return 0
00:18:09.219 21:23:24 -- nvmf/common.sh@439 -- # '[' '' == iso ']'
00:18:09.219 21:23:24 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:18:09.219 21:23:24 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]]
00:18:09.219 21:23:24 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]]
00:18:09.219 21:23:24 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:18:09.219 21:23:24 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']'
00:18:09.219 21:23:24 -- nvmf/common.sh@463 -- # modprobe nvme-tcp
00:18:09.219 21:23:24 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2
00:18:09.219 21:23:24 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt
00:18:09.219 21:23:24 -- common/autotest_common.sh@710 -- # xtrace_disable
00:18:09.219 21:23:24 -- common/autotest_common.sh@10 -- # set +x
00:18:09.219 21:23:24 -- nvmf/common.sh@470 -- # nvmfpid=1214890
00:18:09.219 21:23:24 -- nvmf/common.sh@471 -- # waitforlisten 1214890
00:18:09.219 21:23:24 -- common/autotest_common.sh@817 -- # '[' -z 1214890 ']'
00:18:09.219 21:23:24 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:18:09.219 21:23:24 -- common/autotest_common.sh@822 -- # local max_retries=100
00:18:09.219 21:23:24 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:18:09.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:18:09.219 21:23:24 -- common/autotest_common.sh@826 -- # xtrace_disable
00:18:09.219 21:23:24 -- common/autotest_common.sh@10 -- # set +x
00:18:09.219 21:23:24 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:18:09.219 [2024-04-24 21:23:24.160316] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization...
00:18:09.219 [2024-04-24 21:23:24.160417] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:18:09.478 EAL: No free 2048 kB hugepages reported on node 1
00:18:09.478 [2024-04-24 21:23:24.278316] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:09.478 [2024-04-24 21:23:24.373823] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:18:09.478 [2024-04-24 21:23:24.373859] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
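The topology built above gives one host two TCP endpoints: the first NIC port moves into the cvl_0_0_ns_spdk namespace as the target (10.0.0.2), the second stays in the root namespace as the initiator (10.0.0.1). waitforlisten then blocks until the freshly launched nvmf_tgt answers on /var/tmp/spdk.sock. A minimal stand-in for that helper using the stock rpc.py client; the polling loop is our sketch, and the real helper retries up to max_retries:

    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    # Poll the RPC socket until the target answers
    until scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" || exit 1   # bail out if the target died during startup
        sleep 0.1
    done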
00:18:09.478 [2024-04-24 21:23:24.373869] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:18:09.478 [2024-04-24 21:23:24.373877] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running.
00:18:09.478 [2024-04-24 21:23:24.373884] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:18:09.478 [2024-04-24 21:23:24.373911] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:18:10.049 21:23:24 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:18:10.049 21:23:24 -- common/autotest_common.sh@850 -- # return 0
00:18:10.049 21:23:24 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt
00:18:10.049 21:23:24 -- common/autotest_common.sh@716 -- # xtrace_disable
00:18:10.049 21:23:24 -- common/autotest_common.sh@10 -- # set +x
00:18:10.049 21:23:24 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:18:10.049 21:23:24 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']'
00:18:10.049 21:23:24 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy
00:18:10.049 21:23:24 -- common/autotest_common.sh@549 -- # xtrace_disable
00:18:10.049 21:23:24 -- common/autotest_common.sh@10 -- # set +x
00:18:10.049 [2024-04-24 21:23:24.887157] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:18:10.049 21:23:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:18:10.049 21:23:24 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:18:10.049 21:23:24 -- common/autotest_common.sh@549 -- # xtrace_disable
00:18:10.049 21:23:24 -- common/autotest_common.sh@10 -- # set +x
00:18:10.049 21:23:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:18:10.049 21:23:24 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:18:10.049 21:23:24 -- common/autotest_common.sh@549 -- # xtrace_disable
00:18:10.049 21:23:24 -- common/autotest_common.sh@10 -- # set +x
00:18:10.049 [2024-04-24 21:23:24.903349] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:18:10.049 21:23:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:18:10.049 21:23:24 -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:18:10.049 21:23:24 -- common/autotest_common.sh@549 -- # xtrace_disable
00:18:10.049 21:23:24 -- common/autotest_common.sh@10 -- # set +x
00:18:10.049 21:23:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:18:10.049 21:23:24 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0
00:18:10.049 21:23:24 -- common/autotest_common.sh@549 -- # xtrace_disable
00:18:10.049 21:23:24 -- common/autotest_common.sh@10 -- # set +x
00:18:10.049 malloc0
00:18:10.049 21:23:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:18:10.049 21:23:24 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
00:18:10.049 21:23:24 -- common/autotest_common.sh@549 -- # xtrace_disable
00:18:10.049 21:23:24 -- common/autotest_common.sh@10 -- # set +x
00:18:10.049 21:23:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:18:10.049 21:23:24 -- target/zcopy.sh@33 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192
00:18:10.049 21:23:24 -- target/zcopy.sh@33 -- # gen_nvmf_target_json
00:18:10.049 21:23:24 -- nvmf/common.sh@521 -- # config=()
00:18:10.049 21:23:24 -- nvmf/common.sh@521 -- # local subsystem config
00:18:10.049 21:23:24 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}"
00:18:10.049 21:23:24 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF
00:18:10.049 {
00:18:10.049   "params": {
00:18:10.049     "name": "Nvme$subsystem",
00:18:10.049     "trtype": "$TEST_TRANSPORT",
00:18:10.049     "traddr": "$NVMF_FIRST_TARGET_IP",
00:18:10.049     "adrfam": "ipv4",
00:18:10.049     "trsvcid": "$NVMF_PORT",
00:18:10.049     "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:18:10.049     "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:18:10.049     "hdgst": ${hdgst:-false},
00:18:10.049     "ddgst": ${ddgst:-false}
00:18:10.049   },
00:18:10.049   "method": "bdev_nvme_attach_controller"
00:18:10.049 }
00:18:10.049 EOF
00:18:10.049 )")
00:18:10.049 21:23:24 -- nvmf/common.sh@543 -- # cat
00:18:10.049 21:23:24 -- nvmf/common.sh@545 -- # jq .
00:18:10.049 21:23:24 -- nvmf/common.sh@546 -- # IFS=,
00:18:10.049 21:23:24 -- nvmf/common.sh@547 -- # printf '%s\n' '{
00:18:10.049   "params": {
00:18:10.049     "name": "Nvme1",
00:18:10.049     "trtype": "tcp",
00:18:10.049     "traddr": "10.0.0.2",
00:18:10.049     "adrfam": "ipv4",
00:18:10.049     "trsvcid": "4420",
00:18:10.049     "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:18:10.049     "hostnqn": "nqn.2016-06.io.spdk:host1",
00:18:10.049     "hdgst": false,
00:18:10.049     "ddgst": false
00:18:10.049   },
00:18:10.049   "method": "bdev_nvme_attach_controller"
00:18:10.049 }'
00:18:10.309 [2024-04-24 21:23:25.036443] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization...
00:18:10.309 [2024-04-24 21:23:25.036550] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1215143 ]
00:18:10.309 EAL: No free 2048 kB hugepages reported on node 1
00:18:10.309 [2024-04-24 21:23:25.152233] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:10.309 [2024-04-24 21:23:25.243453] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:18:10.876 Running I/O for 10 seconds...
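The --json /dev/fd/62 in the bdevperf invocation above is bash process substitution: gen_nvmf_target_json's output is handed to bdevperf as a pseudo-file. A hedged equivalent with the resolved config written to a scratch file instead; the outer "subsystems"/"bdev" wrapper is our assumption about the final document shape, since the trace only shows the bdev_nvme_attach_controller entry:

    cat > /tmp/zcopy_bdevperf.json <<'JSON'
{ "subsystems": [ { "subsystem": "bdev", "config": [ {
  "method": "bdev_nvme_attach_controller",
  "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
              "adrfam": "ipv4", "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode1",
              "hostnqn": "nqn.2016-06.io.spdk:host1",
              "hdgst": false, "ddgst": false } } ] } ] }
JSON
    ./build/examples/bdevperf --json /tmp/zcopy_bdevperf.json -t 10 -q 128 -w verify -o 8192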
00:18:20.872
00:18:20.872                                                                    Latency(us)
00:18:20.872 Device Information                                 : runtime(s)       IOPS      MiB/s    Fail/s    TO/s     Average       min       max
00:18:20.872 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:18:20.872 Verification LBA range: start 0x0 length 0x1000
00:18:20.872 Nvme1n1                                            :      10.01    8721.87      68.14      0.00    0.00    14637.64   2000.57  24006.87
00:18:20.872 ===================================================================================================================
00:18:20.872 Total                                              :               8721.87      68.14      0.00    0.00    14637.64   2000.57  24006.87
00:18:21.131 21:23:36 -- target/zcopy.sh@39 -- # perfpid=1217292
00:18:21.131 21:23:36 -- target/zcopy.sh@41 -- # xtrace_disable
00:18:21.131 21:23:36 -- common/autotest_common.sh@10 -- # set +x
00:18:21.131 21:23:36 -- target/zcopy.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:18:21.131 21:23:36 -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:18:21.131 21:23:36 -- nvmf/common.sh@521 -- # config=()
00:18:21.131 21:23:36 -- nvmf/common.sh@521 -- # local subsystem config
00:18:21.131 21:23:36 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}"
00:18:21.131 21:23:36 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF ... EOF )")  [same 15-line bdev_nvme_attach_controller template as in the first run]
00:18:21.131 [2024-04-24 21:23:36.006015] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:18:21.131 [2024-04-24 21:23:36.006060] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:18:21.131 21:23:36 -- nvmf/common.sh@543 -- # cat
00:18:21.131 21:23:36 -- nvmf/common.sh@545 -- # jq .
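The config+= heredoc traced just above is the templating trick gen_nvmf_target_json relies on: an unquoted heredoc expands $TEST_TRANSPORT and friends at capture time, each expansion is pushed onto a bash array, and the joined result is normalized with jq. A reduced re-creation under assumed, hand-set variables (single entry, no wrapper document):

    config=()
    subsystem=1 TEST_TRANSPORT=tcp NVMF_FIRST_TARGET_IP=10.0.0.2 NVMF_PORT=4420
    config+=("$(cat <<EOF
{ "params": { "name": "Nvme$subsystem", "trtype": "$TEST_TRANSPORT",
              "traddr": "$NVMF_FIRST_TARGET_IP", "trsvcid": "$NVMF_PORT",
              "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
              "hdgst": ${hdgst:-false}, "ddgst": ${ddgst:-false} },
  "method": "bdev_nvme_attach_controller" }
EOF
    )")
    IFS=,                                  # entries would be comma-joined
    printf '%s\n' "${config[*]}" | jq .    # pretty-print / validate

The ${hdgst:-false} expansions are how the template defaults the digest flags to false when the caller has not set them.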
00:18:21.131 21:23:36 -- nvmf/common.sh@546 -- # IFS=,
00:18:21.131 [2024-04-24 21:23:36.013978] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:18:21.131 [2024-04-24 21:23:36.013998] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:18:21.131 21:23:36 -- nvmf/common.sh@547 -- # printf '%s\n' '{
00:18:21.131   "params": {
00:18:21.131     "name": "Nvme1",
00:18:21.131     "trtype": "tcp",
00:18:21.131     "traddr": "10.0.0.2",
00:18:21.131     "adrfam": "ipv4",
00:18:21.131     "trsvcid": "4420",
00:18:21.131     "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:18:21.131     "hostnqn": "nqn.2016-06.io.spdk:host1",
00:18:21.131     "hdgst": false,
00:18:21.131     "ddgst": false
00:18:21.131   },
00:18:21.131   "method": "bdev_nvme_attach_controller"
00:18:21.131 }'
00:18:21.131 [2024-04-24 21:23:36.021950 .. 36.069965] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use / nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace  [this pair logged 7 more times at ~8 ms intervals: .021950, .029962, .037959, .045948, .053959, .061960, .069951]
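Each error pair above is one rejected RPC round-trip: NSID 1 was claimed by the nvmf_subsystem_add_ns call during setup, so any further attempt to add a namespace with the same NSID is refused by the target. Reproduced by hand with the same rpc.py verb that the test script's rpc_cmd helper wraps:

    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    # target log: subsystem.c: spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
    # target log: nvmf_rpc.c: nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace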
00:18:21.131 [2024-04-24 21:23:36.070123] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization...
00:18:21.131 [2024-04-24 21:23:36.070231] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1217292 ]
00:18:21.131 [2024-04-24 21:23:36.077962 .. 36.133967] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use / nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace  [8 repetitions of this pair, ~8 ms apart]
00:18:21.389 EAL: No free 2048 kB hugepages reported on node 1
00:18:21.389 [2024-04-24 21:23:36.141980 .. 36.173981] same NSID-in-use / unable-to-add-namespace pair  [5 repetitions]
00:18:21.389 [2024-04-24 21:23:36.179747] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:21.389 [2024-04-24 21:23:36.181997 .. 36.262005] same NSID-in-use / unable-to-add-namespace pair  [11 repetitions]
00:18:21.389 [2024-04-24 21:23:36.269031] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:18:21.389 [2024-04-24 21:23:36.270014 .. 36.630158] same NSID-in-use / unable-to-add-namespace pair  [46 repetitions at ~8 ms intervals, then one more at 36.678532]
00:18:21.907 Running I/O for 5 seconds...
00:18:21.907 [2024-04-24 21:23:36.690161 .. 36.822209] same NSID-in-use / unable-to-add-namespace pair  [15 repetitions, now ~10 ms apart while the randrw workload runs]
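The cadence of the pairs, before and during the 5-second randrw run, is consistent with a driver loop that keeps re-issuing the colliding add_ns while bdevperf holds the connection open, checking that namespace-management RPCs fail cleanly instead of disturbing in-flight zero-copy I/O. A hedged sketch of such a loop; the actual loop body in target/zcopy.sh is not visible in this capture ($perfpid is the bdevperf pid recorded above, 1217292):

    while kill -0 "$perfpid" 2>/dev/null; do
        # Expected to fail while NSID 1 exists; a success here would be a bug.
        scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 \
            && echo "unexpected: duplicate NSID accepted" >&2
    done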
00:18:21.908 [2024-04-24 21:23:36.831905 .. 21:23:37.911327] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use / nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace  [this pair repeats at ~10 ms intervals, roughly 110 more times, until the capture ends mid-entry at 21:23:37.911327]
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.946 [2024-04-24 21:23:37.911353] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.205 [2024-04-24 21:23:37.920677] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.205 [2024-04-24 21:23:37.920701] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.205 [2024-04-24 21:23:37.929834] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.205 [2024-04-24 21:23:37.929860] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.205 [2024-04-24 21:23:37.939476] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.205 [2024-04-24 21:23:37.939501] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.205 [2024-04-24 21:23:37.948825] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.205 [2024-04-24 21:23:37.948852] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.205 [2024-04-24 21:23:37.958042] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.205 [2024-04-24 21:23:37.958067] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.205 [2024-04-24 21:23:37.967380] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.205 [2024-04-24 21:23:37.967407] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.205 [2024-04-24 21:23:37.976136] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.205 [2024-04-24 21:23:37.976161] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.205 [2024-04-24 21:23:37.985579] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.205 [2024-04-24 21:23:37.985606] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.205 [2024-04-24 21:23:37.995447] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.205 [2024-04-24 21:23:37.995472] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.205 [2024-04-24 21:23:38.004953] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.205 [2024-04-24 21:23:38.004979] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.205 [2024-04-24 21:23:38.014152] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.205 [2024-04-24 21:23:38.014177] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.205 [2024-04-24 21:23:38.023279] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.205 [2024-04-24 21:23:38.023305] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.205 [2024-04-24 21:23:38.033113] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.205 [2024-04-24 21:23:38.033139] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.205 [2024-04-24 21:23:38.042404] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.205 [2024-04-24 21:23:38.042429] 
nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.205 [2024-04-24 21:23:38.051649] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.205 [2024-04-24 21:23:38.051676] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.205 [2024-04-24 21:23:38.060735] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.205 [2024-04-24 21:23:38.060759] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.205 [2024-04-24 21:23:38.069950] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.205 [2024-04-24 21:23:38.069976] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.205 [2024-04-24 21:23:38.079081] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.205 [2024-04-24 21:23:38.079106] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.205 [2024-04-24 21:23:38.088750] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.205 [2024-04-24 21:23:38.088775] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.205 [2024-04-24 21:23:38.097418] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.205 [2024-04-24 21:23:38.097443] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.205 [2024-04-24 21:23:38.107138] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.205 [2024-04-24 21:23:38.107165] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.205 [2024-04-24 21:23:38.116456] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.205 [2024-04-24 21:23:38.116481] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.205 [2024-04-24 21:23:38.125817] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.206 [2024-04-24 21:23:38.125841] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.206 [2024-04-24 21:23:38.135139] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.206 [2024-04-24 21:23:38.135164] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.206 [2024-04-24 21:23:38.144989] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.206 [2024-04-24 21:23:38.145014] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.206 [2024-04-24 21:23:38.154371] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.206 [2024-04-24 21:23:38.154396] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.206 [2024-04-24 21:23:38.163804] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.206 [2024-04-24 21:23:38.163827] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.465 [2024-04-24 21:23:38.172705] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.465 [2024-04-24 21:23:38.172733] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.465 [2024-04-24 21:23:38.182544] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.465 [2024-04-24 21:23:38.182568] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.465 [2024-04-24 21:23:38.191903] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.465 [2024-04-24 21:23:38.191930] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.465 [2024-04-24 21:23:38.201648] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.466 [2024-04-24 21:23:38.201673] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.466 [2024-04-24 21:23:38.211012] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.466 [2024-04-24 21:23:38.211036] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.466 [2024-04-24 21:23:38.220698] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.466 [2024-04-24 21:23:38.220722] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.466 [2024-04-24 21:23:38.230356] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.466 [2024-04-24 21:23:38.230380] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.466 [2024-04-24 21:23:38.239967] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.466 [2024-04-24 21:23:38.239991] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.466 [2024-04-24 21:23:38.249021] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.466 [2024-04-24 21:23:38.249047] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.466 [2024-04-24 21:23:38.258330] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.466 [2024-04-24 21:23:38.258354] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.466 [2024-04-24 21:23:38.268178] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.466 [2024-04-24 21:23:38.268203] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.466 [2024-04-24 21:23:38.277797] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.466 [2024-04-24 21:23:38.277829] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.466 [2024-04-24 21:23:38.285961] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.466 [2024-04-24 21:23:38.285987] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.466 [2024-04-24 21:23:38.296309] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.466 [2024-04-24 21:23:38.296336] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.466 [2024-04-24 21:23:38.306242] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.466 [2024-04-24 21:23:38.306276] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.466 [2024-04-24 21:23:38.316059] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.466 [2024-04-24 21:23:38.316085] 
nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.466 [2024-04-24 21:23:38.325507] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.466 [2024-04-24 21:23:38.325536] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.466 [2024-04-24 21:23:38.334182] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.466 [2024-04-24 21:23:38.334211] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.466 [2024-04-24 21:23:38.344119] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.466 [2024-04-24 21:23:38.344148] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.466 [2024-04-24 21:23:38.353035] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.466 [2024-04-24 21:23:38.353062] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.466 [2024-04-24 21:23:38.362378] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.466 [2024-04-24 21:23:38.362407] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.466 [2024-04-24 21:23:38.371751] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.466 [2024-04-24 21:23:38.371780] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.466 [2024-04-24 21:23:38.380656] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.466 [2024-04-24 21:23:38.380687] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.466 [2024-04-24 21:23:38.389975] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.466 [2024-04-24 21:23:38.390004] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.466 [2024-04-24 21:23:38.399640] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.466 [2024-04-24 21:23:38.399667] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.466 [2024-04-24 21:23:38.408717] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.466 [2024-04-24 21:23:38.408744] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.466 [2024-04-24 21:23:38.418610] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.466 [2024-04-24 21:23:38.418635] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.466 [2024-04-24 21:23:38.428096] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.466 [2024-04-24 21:23:38.428124] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.725 [2024-04-24 21:23:38.436755] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.725 [2024-04-24 21:23:38.436781] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.725 [2024-04-24 21:23:38.446071] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.725 [2024-04-24 21:23:38.446097] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.725 [2024-04-24 21:23:38.455769] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.725 [2024-04-24 21:23:38.455794] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.725 [2024-04-24 21:23:38.464407] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.725 [2024-04-24 21:23:38.464434] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.725 [2024-04-24 21:23:38.474223] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.725 [2024-04-24 21:23:38.474248] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.725 [2024-04-24 21:23:38.484026] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.725 [2024-04-24 21:23:38.484051] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.725 [2024-04-24 21:23:38.493410] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.725 [2024-04-24 21:23:38.493436] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.725 [2024-04-24 21:23:38.503413] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.725 [2024-04-24 21:23:38.503437] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.725 [2024-04-24 21:23:38.512729] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.725 [2024-04-24 21:23:38.512753] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.725 [2024-04-24 21:23:38.522609] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.725 [2024-04-24 21:23:38.522636] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.725 [2024-04-24 21:23:38.531941] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.725 [2024-04-24 21:23:38.531967] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.725 [2024-04-24 21:23:38.541868] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.725 [2024-04-24 21:23:38.541894] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.725 [2024-04-24 21:23:38.551701] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.725 [2024-04-24 21:23:38.551727] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.725 [2024-04-24 21:23:38.561055] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.725 [2024-04-24 21:23:38.561081] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.725 [2024-04-24 21:23:38.570238] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.725 [2024-04-24 21:23:38.570263] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.725 [2024-04-24 21:23:38.579592] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.725 [2024-04-24 21:23:38.579615] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.725 [2024-04-24 21:23:38.589494] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.725 [2024-04-24 21:23:38.589521] 
nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.725 [2024-04-24 21:23:38.598789] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.725 [2024-04-24 21:23:38.598821] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.725 [2024-04-24 21:23:38.608695] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.725 [2024-04-24 21:23:38.608723] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.725 [2024-04-24 21:23:38.619018] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.725 [2024-04-24 21:23:38.619042] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.725 [2024-04-24 21:23:38.628177] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.725 [2024-04-24 21:23:38.628202] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.725 [2024-04-24 21:23:38.637970] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.725 [2024-04-24 21:23:38.637997] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.725 [2024-04-24 21:23:38.647376] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.725 [2024-04-24 21:23:38.647401] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.725 [2024-04-24 21:23:38.656725] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.725 [2024-04-24 21:23:38.656752] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.725 [2024-04-24 21:23:38.666053] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.725 [2024-04-24 21:23:38.666079] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.725 [2024-04-24 21:23:38.674740] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.725 [2024-04-24 21:23:38.674768] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.725 [2024-04-24 21:23:38.683517] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.726 [2024-04-24 21:23:38.683544] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.984 [2024-04-24 21:23:38.693129] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.984 [2024-04-24 21:23:38.693155] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.984 [2024-04-24 21:23:38.702442] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.984 [2024-04-24 21:23:38.702468] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.984 [2024-04-24 21:23:38.711722] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.984 [2024-04-24 21:23:38.711747] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.984 [2024-04-24 21:23:38.721567] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.984 [2024-04-24 21:23:38.721593] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.984 [2024-04-24 21:23:38.730196] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.984 [2024-04-24 21:23:38.730221] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.984 [2024-04-24 21:23:38.739655] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.984 [2024-04-24 21:23:38.739682] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.984 [2024-04-24 21:23:38.749148] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.984 [2024-04-24 21:23:38.749175] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.984 [2024-04-24 21:23:38.758473] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.984 [2024-04-24 21:23:38.758499] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.984 [2024-04-24 21:23:38.767518] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.984 [2024-04-24 21:23:38.767544] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.984 [2024-04-24 21:23:38.777306] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.984 [2024-04-24 21:23:38.777336] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.984 [2024-04-24 21:23:38.786005] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.984 [2024-04-24 21:23:38.786033] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.984 [2024-04-24 21:23:38.795798] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.984 [2024-04-24 21:23:38.795824] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.984 [2024-04-24 21:23:38.805118] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.984 [2024-04-24 21:23:38.805145] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.984 [2024-04-24 21:23:38.813926] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.984 [2024-04-24 21:23:38.813951] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.984 [2024-04-24 21:23:38.823824] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.984 [2024-04-24 21:23:38.823851] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.984 [2024-04-24 21:23:38.832672] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.984 [2024-04-24 21:23:38.832698] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.984 [2024-04-24 21:23:38.841943] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.984 [2024-04-24 21:23:38.841968] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.984 [2024-04-24 21:23:38.851439] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.984 [2024-04-24 21:23:38.851465] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.984 [2024-04-24 21:23:38.860699] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.984 [2024-04-24 21:23:38.860728] 
nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.984 [2024-04-24 21:23:38.870508] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.984 [2024-04-24 21:23:38.870534] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.984 [2024-04-24 21:23:38.879789] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.984 [2024-04-24 21:23:38.879817] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.984 [2024-04-24 21:23:38.889120] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.984 [2024-04-24 21:23:38.889145] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.984 [2024-04-24 21:23:38.898477] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.984 [2024-04-24 21:23:38.898504] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.984 [2024-04-24 21:23:38.907731] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.984 [2024-04-24 21:23:38.907757] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.984 [2024-04-24 21:23:38.917085] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.984 [2024-04-24 21:23:38.917112] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.984 [2024-04-24 21:23:38.927012] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.984 [2024-04-24 21:23:38.927039] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.984 [2024-04-24 21:23:38.936976] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.984 [2024-04-24 21:23:38.937002] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.984 [2024-04-24 21:23:38.945920] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.984 [2024-04-24 21:23:38.945946] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.245 [2024-04-24 21:23:38.955313] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.245 [2024-04-24 21:23:38.955345] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.245 [2024-04-24 21:23:38.964561] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.245 [2024-04-24 21:23:38.964586] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.245 [2024-04-24 21:23:38.974073] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.245 [2024-04-24 21:23:38.974099] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.245 [2024-04-24 21:23:38.983593] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.245 [2024-04-24 21:23:38.983622] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.245 [2024-04-24 21:23:38.992882] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.245 [2024-04-24 21:23:38.992908] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.245 [2024-04-24 21:23:39.002333] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.245 [2024-04-24 21:23:39.002362] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.245 [2024-04-24 21:23:39.010980] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.245 [2024-04-24 21:23:39.011006] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.245 [2024-04-24 21:23:39.020182] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.245 [2024-04-24 21:23:39.020210] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.245 [2024-04-24 21:23:39.029380] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.245 [2024-04-24 21:23:39.029404] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.245 [2024-04-24 21:23:39.038768] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.245 [2024-04-24 21:23:39.038796] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.245 [2024-04-24 21:23:39.048075] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.245 [2024-04-24 21:23:39.048101] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.245 [2024-04-24 21:23:39.058002] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.245 [2024-04-24 21:23:39.058030] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.245 [2024-04-24 21:23:39.067425] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.245 [2024-04-24 21:23:39.067452] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.245 [2024-04-24 21:23:39.076764] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.245 [2024-04-24 21:23:39.076790] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.245 [2024-04-24 21:23:39.086115] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.245 [2024-04-24 21:23:39.086142] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.245 [2024-04-24 21:23:39.095930] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.245 [2024-04-24 21:23:39.095956] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.245 [2024-04-24 21:23:39.104614] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.245 [2024-04-24 21:23:39.104640] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.245 [2024-04-24 21:23:39.113904] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.245 [2024-04-24 21:23:39.113932] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.245 [2024-04-24 21:23:39.123220] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.245 [2024-04-24 21:23:39.123246] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.245 [2024-04-24 21:23:39.133036] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.245 [2024-04-24 21:23:39.133068] 
nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.245 [2024-04-24 21:23:39.141724] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.245 [2024-04-24 21:23:39.141748] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.245 [2024-04-24 21:23:39.150941] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.245 [2024-04-24 21:23:39.150968] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.245 [2024-04-24 21:23:39.160881] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.245 [2024-04-24 21:23:39.160907] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.245 [2024-04-24 21:23:39.169605] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.245 [2024-04-24 21:23:39.169630] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.245 [2024-04-24 21:23:39.179396] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.245 [2024-04-24 21:23:39.179422] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.245 [2024-04-24 21:23:39.189466] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.245 [2024-04-24 21:23:39.189493] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.245 [2024-04-24 21:23:39.198964] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.245 [2024-04-24 21:23:39.198990] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.245 [2024-04-24 21:23:39.207626] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.245 [2024-04-24 21:23:39.207653] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.504 [2024-04-24 21:23:39.216867] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.504 [2024-04-24 21:23:39.216894] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.504 [2024-04-24 21:23:39.226324] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.504 [2024-04-24 21:23:39.226349] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.504 [2024-04-24 21:23:39.235634] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.504 [2024-04-24 21:23:39.235660] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.504 [2024-04-24 21:23:39.244940] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.504 [2024-04-24 21:23:39.244964] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.504 [2024-04-24 21:23:39.254174] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.504 [2024-04-24 21:23:39.254199] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.504 [2024-04-24 21:23:39.263700] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.504 [2024-04-24 21:23:39.263726] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.504 [2024-04-24 21:23:39.272874] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.504 [2024-04-24 21:23:39.272899] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.504 [2024-04-24 21:23:39.282135] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.504 [2024-04-24 21:23:39.282159] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.504 [2024-04-24 21:23:39.291119] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.504 [2024-04-24 21:23:39.291143] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.504 [2024-04-24 21:23:39.300296] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.504 [2024-04-24 21:23:39.300320] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.504 [2024-04-24 21:23:39.309941] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.504 [2024-04-24 21:23:39.309969] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.504 [2024-04-24 21:23:39.318581] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.504 [2024-04-24 21:23:39.318606] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.504 [2024-04-24 21:23:39.327234] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.504 [2024-04-24 21:23:39.327260] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.504 [2024-04-24 21:23:39.336899] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.504 [2024-04-24 21:23:39.336924] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.504 [2024-04-24 21:23:39.345567] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.504 [2024-04-24 21:23:39.345591] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.504 [2024-04-24 21:23:39.354839] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.504 [2024-04-24 21:23:39.354866] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.504 [2024-04-24 21:23:39.364301] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.505 [2024-04-24 21:23:39.364330] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.505 [2024-04-24 21:23:39.374376] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.505 [2024-04-24 21:23:39.374404] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.505 [2024-04-24 21:23:39.383794] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.505 [2024-04-24 21:23:39.383820] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.505 [2024-04-24 21:23:39.392916] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.505 [2024-04-24 21:23:39.392942] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.505 [2024-04-24 21:23:39.402306] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.505 [2024-04-24 21:23:39.402330] 
nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.505 [2024-04-24 21:23:39.411624] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.505 [2024-04-24 21:23:39.411649] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.505 [2024-04-24 21:23:39.421403] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.505 [2024-04-24 21:23:39.421427] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.505 [2024-04-24 21:23:39.430777] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.505 [2024-04-24 21:23:39.430802] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.505 [2024-04-24 21:23:39.439657] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.505 [2024-04-24 21:23:39.439681] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.505 [2024-04-24 21:23:39.448900] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.505 [2024-04-24 21:23:39.448925] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.505 [2024-04-24 21:23:39.458709] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.505 [2024-04-24 21:23:39.458735] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.505 [2024-04-24 21:23:39.467376] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.505 [2024-04-24 21:23:39.467400] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.764 [2024-04-24 21:23:39.476799] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.764 [2024-04-24 21:23:39.476826] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.764 [2024-04-24 21:23:39.485855] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.764 [2024-04-24 21:23:39.485880] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.764 [2024-04-24 21:23:39.495657] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.764 [2024-04-24 21:23:39.495684] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.764 [2024-04-24 21:23:39.505447] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.764 [2024-04-24 21:23:39.505472] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.764 [2024-04-24 21:23:39.514045] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.764 [2024-04-24 21:23:39.514070] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.764 [2024-04-24 21:23:39.523240] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.764 [2024-04-24 21:23:39.523265] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.764 [2024-04-24 21:23:39.532988] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.764 [2024-04-24 21:23:39.533014] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.764 [2024-04-24 21:23:39.542261] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.764 [2024-04-24 21:23:39.542289] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.764 [2024-04-24 21:23:39.552065] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.764 [2024-04-24 21:23:39.552092] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.764 [2024-04-24 21:23:39.561334] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.764 [2024-04-24 21:23:39.561360] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.764 [2024-04-24 21:23:39.570741] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.764 [2024-04-24 21:23:39.570766] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.764 [2024-04-24 21:23:39.580315] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.764 [2024-04-24 21:23:39.580342] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.764 [2024-04-24 21:23:39.589506] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.764 [2024-04-24 21:23:39.589531] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.764 [2024-04-24 21:23:39.599182] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.764 [2024-04-24 21:23:39.599207] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.764 [2024-04-24 21:23:39.608944] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.764 [2024-04-24 21:23:39.608968] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.764 [2024-04-24 21:23:39.618264] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.764 [2024-04-24 21:23:39.618297] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.764 [2024-04-24 21:23:39.628159] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.764 [2024-04-24 21:23:39.628186] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.764 [2024-04-24 21:23:39.636665] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.764 [2024-04-24 21:23:39.636688] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.764 [2024-04-24 21:23:39.645860] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.764 [2024-04-24 21:23:39.645887] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.764 [2024-04-24 21:23:39.655576] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.764 [2024-04-24 21:23:39.655601] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.764 [2024-04-24 21:23:39.665314] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.764 [2024-04-24 21:23:39.665338] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.764 [2024-04-24 21:23:39.674693] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.764 [2024-04-24 21:23:39.674717] 
nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:18:24.764 [2024-04-24 21:23:39.683922] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:18:24.764 [2024-04-24 21:23:39.683949] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:18:26.845 [2024-04-24 21:23:41.698954] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:18:26.845 [2024-04-24 21:23:41.698978] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:18:26.845
00:18:26.845 Latency(us)
00:18:26.845 Device Information                                     : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:18:26.845 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:18:26.845 Nvme1n1                                                :       5.01   16792.13     131.19      0.00     0.00    7616.29    3345.79   16004.58
00:18:26.845 ===================================================================================================================
00:18:26.845 Total                                                  :              16792.13     131.19      0.00     0.00    7616.29    3345.79   16004.58
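As a cross-check on the summary just above (an aside, not captured tool output): the MiB/s column follows directly from the IOPS column and the job's 8192-byte I/O size, since 16792.13 × 8192 / 2^20 ≈ 131.19 MiB/s, and the 5.01 s runtime implies roughly 16792.13 × 5.01 ≈ 84,100 completed I/Os. In the shell idiom used throughout this log:

  echo '16792.13 * 8192 / 1048576' | bc -l    # prints ~131.19, matching the MiB/s column above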
00:18:26.845 [2024-04-24 21:23:41.706941] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:18:26.845 [2024-04-24 21:23:41.706963] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:18:27.107 [2024-04-24 21:23:42.067021] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:18:27.107 [2024-04-24 21:23:42.067037] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:18:27.368 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1217292) - No such process
00:18:27.368 21:23:42 -- target/zcopy.sh@49 -- # wait 1217292
00:18:27.368 21:23:42 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:18:27.368 21:23:42 -- common/autotest_common.sh@549 -- # xtrace_disable
00:18:27.368 21:23:42 -- common/autotest_common.sh@10 -- # set +x
00:18:27.368 21:23:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:18:27.368 21:23:42 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:18:27.368 21:23:42 -- common/autotest_common.sh@549 -- # xtrace_disable
00:18:27.368 21:23:42 -- common/autotest_common.sh@10 -- # set +x
00:18:27.368 delay0
00:18:27.368 21:23:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:18:27.368 21:23:42 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:18:27.368 21:23:42 -- common/autotest_common.sh@549 -- # xtrace_disable
00:18:27.368 21:23:42 -- common/autotest_common.sh@10 -- # set +x
00:18:27.368 21:23:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
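The three rpc_cmd invocations above are the core of the zcopy abort scenario: namespace 1 is detached, a delay bdev is layered over malloc0 so that in-flight I/O stays outstanding long enough to be aborted, and the delayed bdev is re-attached as NSID 1. In SPDK's autotest scripts rpc_cmd effectively forwards to scripts/rpc.py, so the same sequence can be sketched outside the harness roughly as follows (a sketch, not the harness itself; the subsystem NQN and bdev names are taken from this run, and the RPC socket is whatever the target was started with):

  # Sketch: assumes a running SPDK nvmf target that already exposes
  # nqn.2016-06.io.spdk:cnode1 backed by a malloc bdev named malloc0.
  ./scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  ./scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000   # avg/p99 read and write delays, in microseconds
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1

With a one-second artificial latency on every I/O, the abort example launched next has a realistic chance of catching commands still in flight.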
00:18:33.944 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 776 00:18:33.944 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 1049, failed to submit 47 00:18:33.944 success 856, unsuccess 193, failed 0 00:18:33.944 21:23:48 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:18:33.944 21:23:48 -- target/zcopy.sh@60 -- # nvmftestfini 00:18:33.944 21:23:48 -- nvmf/common.sh@477 -- # nvmfcleanup 00:18:33.944 21:23:48 -- nvmf/common.sh@117 -- # sync 00:18:33.944 21:23:48 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:33.944 21:23:48 -- nvmf/common.sh@120 -- # set +e 00:18:33.944 21:23:48 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:33.944 21:23:48 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:33.944 rmmod nvme_tcp 00:18:33.944 rmmod nvme_fabrics 00:18:33.944 rmmod nvme_keyring 00:18:33.944 21:23:48 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:33.944 21:23:48 -- nvmf/common.sh@124 -- # set -e 00:18:33.944 21:23:48 -- nvmf/common.sh@125 -- # return 0 00:18:33.944 21:23:48 -- nvmf/common.sh@478 -- # '[' -n 1214890 ']' 00:18:33.944 21:23:48 -- nvmf/common.sh@479 -- # killprocess 1214890 00:18:33.944 21:23:48 -- common/autotest_common.sh@936 -- # '[' -z 1214890 ']' 00:18:33.944 21:23:48 -- common/autotest_common.sh@940 -- # kill -0 1214890 00:18:33.944 21:23:48 -- common/autotest_common.sh@941 -- # uname 00:18:33.944 21:23:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:33.944 21:23:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1214890 00:18:33.944 21:23:48 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:33.944 21:23:48 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:33.944 21:23:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1214890' 00:18:33.944 killing process with pid 1214890 00:18:33.944 21:23:48 -- common/autotest_common.sh@955 -- # kill 1214890 00:18:33.944 21:23:48 -- common/autotest_common.sh@960 -- # wait 1214890 00:18:34.204 21:23:49 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:18:34.204 21:23:49 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:18:34.204 21:23:49 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:18:34.204 21:23:49 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:34.204 21:23:49 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:34.204 21:23:49 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:34.204 21:23:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:34.204 21:23:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:36.242 21:23:51 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:36.242 00:18:36.242 real 0m32.674s 00:18:36.242 user 0m47.554s 00:18:36.242 sys 0m7.680s 00:18:36.242 21:23:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:36.242 21:23:51 -- common/autotest_common.sh@10 -- # set +x 00:18:36.242 ************************************ 00:18:36.242 END TEST nvmf_zcopy 00:18:36.242 ************************************ 00:18:36.242 21:23:51 -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:18:36.242 21:23:51 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:36.242 21:23:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:36.242 21:23:51 -- common/autotest_common.sh@10 -- # set +x 00:18:36.502 ************************************ 
00:18:36.502 START TEST nvmf_nmic 00:18:36.502 ************************************ 00:18:36.502 21:23:51 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:18:36.502 * Looking for test storage... 00:18:36.502 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:18:36.502 21:23:51 -- target/nmic.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:18:36.502 21:23:51 -- nvmf/common.sh@7 -- # uname -s 00:18:36.502 21:23:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:36.502 21:23:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:36.502 21:23:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:36.502 21:23:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:36.502 21:23:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:36.502 21:23:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:36.502 21:23:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:36.502 21:23:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:36.502 21:23:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:36.502 21:23:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:36.502 21:23:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b7babf-2e5c-ee11-906e-a4bf01970bf2 00:18:36.502 21:23:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=80b7babf-2e5c-ee11-906e-a4bf01970bf2 00:18:36.502 21:23:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:36.502 21:23:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:36.502 21:23:51 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:18:36.502 21:23:51 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:36.502 21:23:51 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:18:36.502 21:23:51 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:36.502 21:23:51 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:36.502 21:23:51 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:36.502 21:23:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:36.502 21:23:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:36.502 21:23:51 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:36.502 21:23:51 -- paths/export.sh@5 -- # export PATH 00:18:36.502 21:23:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:36.502 21:23:51 -- nvmf/common.sh@47 -- # : 0 00:18:36.502 21:23:51 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:36.502 21:23:51 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:36.502 21:23:51 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:36.502 21:23:51 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:36.502 21:23:51 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:36.502 21:23:51 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:36.502 21:23:51 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:36.502 21:23:51 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:36.502 21:23:51 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:36.502 21:23:51 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:36.502 21:23:51 -- target/nmic.sh@14 -- # nvmftestinit 00:18:36.502 21:23:51 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:18:36.502 21:23:51 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:36.502 21:23:51 -- nvmf/common.sh@437 -- # prepare_net_devs 00:18:36.502 21:23:51 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:18:36.502 21:23:51 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:18:36.502 21:23:51 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:36.502 21:23:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:36.502 21:23:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:36.502 21:23:51 -- nvmf/common.sh@403 -- # [[ phy-fallback != virt ]] 00:18:36.502 21:23:51 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:18:36.502 21:23:51 -- nvmf/common.sh@285 -- # xtrace_disable 00:18:36.502 21:23:51 -- common/autotest_common.sh@10 -- # set +x 00:18:41.776 21:23:56 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:41.776 21:23:56 -- nvmf/common.sh@291 -- # pci_devs=() 00:18:41.776 21:23:56 -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:41.776 21:23:56 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:41.776 21:23:56 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:41.776 21:23:56 -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:41.776 21:23:56 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:41.776 21:23:56 -- nvmf/common.sh@295 -- # net_devs=() 00:18:41.776 21:23:56 -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:41.776 21:23:56 -- 
nvmf/common.sh@296 -- # e810=() 00:18:41.776 21:23:56 -- nvmf/common.sh@296 -- # local -ga e810 00:18:41.776 21:23:56 -- nvmf/common.sh@297 -- # x722=() 00:18:41.777 21:23:56 -- nvmf/common.sh@297 -- # local -ga x722 00:18:41.777 21:23:56 -- nvmf/common.sh@298 -- # mlx=() 00:18:41.777 21:23:56 -- nvmf/common.sh@298 -- # local -ga mlx 00:18:41.777 21:23:56 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:41.777 21:23:56 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:41.777 21:23:56 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:41.777 21:23:56 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:41.777 21:23:56 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:41.777 21:23:56 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:41.777 21:23:56 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:41.777 21:23:56 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:41.777 21:23:56 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:41.777 21:23:56 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:41.777 21:23:56 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:41.777 21:23:56 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:41.777 21:23:56 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:41.777 21:23:56 -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:18:41.777 21:23:56 -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:18:41.777 21:23:56 -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:18:41.777 21:23:56 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:41.777 21:23:56 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:41.777 21:23:56 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:18:41.777 Found 0000:27:00.0 (0x8086 - 0x159b) 00:18:41.777 21:23:56 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:41.777 21:23:56 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:41.777 21:23:56 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:41.777 21:23:56 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:41.777 21:23:56 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:41.777 21:23:56 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:41.777 21:23:56 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:18:41.777 Found 0000:27:00.1 (0x8086 - 0x159b) 00:18:41.777 21:23:56 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:41.777 21:23:56 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:41.777 21:23:56 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:41.777 21:23:56 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:41.777 21:23:56 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:41.777 21:23:56 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:41.777 21:23:56 -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:18:41.777 21:23:56 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:41.777 21:23:56 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:41.777 21:23:56 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:41.777 21:23:56 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:41.777 21:23:56 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:18:41.777 Found net devices under 0000:27:00.0: cvl_0_0 00:18:41.777 
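For reference, the discovery loop being xtraced here resolves each matching PCI function to its kernel interface purely through sysfs. A minimal sketch of that walk, with the two addresses from this run hard-coded for illustration:

  # Map NVMe-oF-capable NIC PCI functions to net device names via sysfs
  for pci in 0000:27:00.0 0000:27:00.1; do
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)    # e.g. .../net/cvl_0_0
      pci_net_devs=("${pci_net_devs[@]##*/}")             # strip the path, keep the ifname
      echo "Found net devices under $pci: ${pci_net_devs[*]}"
  done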
21:23:56 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:41.777 21:23:56 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:41.777 21:23:56 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:41.777 21:23:56 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:41.777 21:23:56 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:41.777 21:23:56 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:18:41.777 Found net devices under 0000:27:00.1: cvl_0_1 00:18:41.777 21:23:56 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:41.777 21:23:56 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:18:41.777 21:23:56 -- nvmf/common.sh@403 -- # is_hw=yes 00:18:41.777 21:23:56 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:18:41.777 21:23:56 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:18:41.777 21:23:56 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:18:41.777 21:23:56 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:41.777 21:23:56 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:41.777 21:23:56 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:41.777 21:23:56 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:41.777 21:23:56 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:41.777 21:23:56 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:41.777 21:23:56 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:41.777 21:23:56 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:41.777 21:23:56 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:41.777 21:23:56 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:41.777 21:23:56 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:41.777 21:23:56 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:41.777 21:23:56 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:41.777 21:23:56 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:41.777 21:23:56 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:41.777 21:23:56 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:41.777 21:23:56 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:41.777 21:23:56 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:41.777 21:23:56 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:41.777 21:23:56 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:41.777 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:41.777 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.652 ms 00:18:41.777 00:18:41.777 --- 10.0.0.2 ping statistics --- 00:18:41.777 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:41.777 rtt min/avg/max/mdev = 0.652/0.652/0.652/0.000 ms 00:18:41.777 21:23:56 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:41.777 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:41.777 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.329 ms 00:18:41.777 00:18:41.777 --- 10.0.0.1 ping statistics --- 00:18:41.777 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:41.777 rtt min/avg/max/mdev = 0.329/0.329/0.329/0.000 ms 00:18:41.777 21:23:56 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:41.777 21:23:56 -- nvmf/common.sh@411 -- # return 0 00:18:41.777 21:23:56 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:18:41.777 21:23:56 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:41.777 21:23:56 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:18:41.777 21:23:56 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:18:41.777 21:23:56 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:41.777 21:23:56 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:18:41.777 21:23:56 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:18:42.036 21:23:56 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:18:42.036 21:23:56 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:42.036 21:23:56 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:42.036 21:23:56 -- common/autotest_common.sh@10 -- # set +x 00:18:42.036 21:23:56 -- nvmf/common.sh@470 -- # nvmfpid=1223583 00:18:42.036 21:23:56 -- nvmf/common.sh@471 -- # waitforlisten 1223583 00:18:42.036 21:23:56 -- common/autotest_common.sh@817 -- # '[' -z 1223583 ']' 00:18:42.036 21:23:56 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:42.036 21:23:56 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:42.036 21:23:56 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:42.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:42.036 21:23:56 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:42.036 21:23:56 -- common/autotest_common.sh@10 -- # set +x 00:18:42.036 21:23:56 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:42.036 [2024-04-24 21:23:56.854621] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization... 00:18:42.036 [2024-04-24 21:23:56.854732] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:42.036 EAL: No free 2048 kB hugepages reported on node 1 00:18:42.036 [2024-04-24 21:23:56.978326] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:42.295 [2024-04-24 21:23:57.078522] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:42.295 [2024-04-24 21:23:57.078557] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:42.295 [2024-04-24 21:23:57.078569] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:42.295 [2024-04-24 21:23:57.078578] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:42.295 [2024-04-24 21:23:57.078585] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
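The nvmf_tcp_init block a few records up is the part worth keeping if you want to reproduce this loopback topology by hand: one port of the NIC is moved into its own network namespace so the target (10.0.0.2) and the initiator (10.0.0.1) talk over a real TCP path between the two ports. The core of it, lifted from the trace above:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target-side port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator port stays in the default ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                 # initiator -> target sanity check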
00:18:42.295 [2024-04-24 21:23:57.078738] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:42.295 [2024-04-24 21:23:57.078837] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:42.295 [2024-04-24 21:23:57.078937] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:42.295 [2024-04-24 21:23:57.078947] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:42.862 21:23:57 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:42.863 21:23:57 -- common/autotest_common.sh@850 -- # return 0 00:18:42.863 21:23:57 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:42.863 21:23:57 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:42.863 21:23:57 -- common/autotest_common.sh@10 -- # set +x 00:18:42.863 21:23:57 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:42.863 21:23:57 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:42.863 21:23:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:42.863 21:23:57 -- common/autotest_common.sh@10 -- # set +x 00:18:42.863 [2024-04-24 21:23:57.585026] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:42.863 21:23:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:42.863 21:23:57 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:42.863 21:23:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:42.863 21:23:57 -- common/autotest_common.sh@10 -- # set +x 00:18:42.863 Malloc0 00:18:42.863 21:23:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:42.863 21:23:57 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:42.863 21:23:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:42.863 21:23:57 -- common/autotest_common.sh@10 -- # set +x 00:18:42.863 21:23:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:42.863 21:23:57 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:42.863 21:23:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:42.863 21:23:57 -- common/autotest_common.sh@10 -- # set +x 00:18:42.863 21:23:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:42.863 21:23:57 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:42.863 21:23:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:42.863 21:23:57 -- common/autotest_common.sh@10 -- # set +x 00:18:42.863 [2024-04-24 21:23:57.652102] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:42.863 21:23:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:42.863 21:23:57 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:18:42.863 test case1: single bdev can't be used in multiple subsystems 00:18:42.863 21:23:57 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:18:42.863 21:23:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:42.863 21:23:57 -- common/autotest_common.sh@10 -- # set +x 00:18:42.863 21:23:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:42.863 21:23:57 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:18:42.863 21:23:57 -- common/autotest_common.sh@549 -- # xtrace_disable 
00:18:42.863 21:23:57 -- common/autotest_common.sh@10 -- # set +x 00:18:42.863 21:23:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:42.863 21:23:57 -- target/nmic.sh@28 -- # nmic_status=0 00:18:42.863 21:23:57 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:18:42.863 21:23:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:42.863 21:23:57 -- common/autotest_common.sh@10 -- # set +x 00:18:42.863 [2024-04-24 21:23:57.675925] bdev.c:7988:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:18:42.863 [2024-04-24 21:23:57.675955] subsystem.c:1930:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:18:42.863 [2024-04-24 21:23:57.675967] nvmf_rpc.c:1529:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.863 request: 00:18:42.863 { 00:18:42.863 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:18:42.863 "namespace": { 00:18:42.863 "bdev_name": "Malloc0", 00:18:42.863 "no_auto_visible": false 00:18:42.863 }, 00:18:42.863 "method": "nvmf_subsystem_add_ns", 00:18:42.863 "req_id": 1 00:18:42.863 } 00:18:42.863 Got JSON-RPC error response 00:18:42.863 response: 00:18:42.863 { 00:18:42.863 "code": -32602, 00:18:42.863 "message": "Invalid parameters" 00:18:42.863 } 00:18:42.863 21:23:57 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:18:42.863 21:23:57 -- target/nmic.sh@29 -- # nmic_status=1 00:18:42.863 21:23:57 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:18:42.863 21:23:57 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:18:42.863 Adding namespace failed - expected result. 00:18:42.863 21:23:57 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:18:42.863 test case2: host connect to nvmf target in multiple paths 00:18:42.863 21:23:57 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:42.863 21:23:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:42.863 21:23:57 -- common/autotest_common.sh@10 -- # set +x 00:18:42.863 [2024-04-24 21:23:57.684058] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:42.863 21:23:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:42.863 21:23:57 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b7babf-2e5c-ee11-906e-a4bf01970bf2 --hostid=80b7babf-2e5c-ee11-906e-a4bf01970bf2 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:44.244 21:23:59 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b7babf-2e5c-ee11-906e-a4bf01970bf2 --hostid=80b7babf-2e5c-ee11-906e-a4bf01970bf2 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:18:45.638 21:24:00 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:18:45.638 21:24:00 -- common/autotest_common.sh@1184 -- # local i=0 00:18:45.638 21:24:00 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:18:45.638 21:24:00 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:18:45.638 21:24:00 -- common/autotest_common.sh@1191 -- # sleep 2 00:18:48.177 21:24:02 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:18:48.177 21:24:02 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:18:48.177 21:24:02 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:18:48.177 21:24:02 -- 
common/autotest_common.sh@1193 -- # nvme_devices=1 00:18:48.177 21:24:02 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:18:48.177 21:24:02 -- common/autotest_common.sh@1194 -- # return 0 00:18:48.177 21:24:02 -- target/nmic.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:18:48.177 [global] 00:18:48.177 thread=1 00:18:48.177 invalidate=1 00:18:48.177 rw=write 00:18:48.177 time_based=1 00:18:48.177 runtime=1 00:18:48.177 ioengine=libaio 00:18:48.177 direct=1 00:18:48.177 bs=4096 00:18:48.177 iodepth=1 00:18:48.177 norandommap=0 00:18:48.177 numjobs=1 00:18:48.177 00:18:48.177 verify_dump=1 00:18:48.177 verify_backlog=512 00:18:48.177 verify_state_save=0 00:18:48.177 do_verify=1 00:18:48.177 verify=crc32c-intel 00:18:48.177 [job0] 00:18:48.177 filename=/dev/nvme0n1 00:18:48.177 Could not set queue depth (nvme0n1) 00:18:48.177 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:48.177 fio-3.35 00:18:48.177 Starting 1 thread 00:18:49.555 00:18:49.555 job0: (groupid=0, jobs=1): err= 0: pid=1225082: Wed Apr 24 21:24:04 2024 00:18:49.555 read: IOPS=21, BW=86.5KiB/s (88.6kB/s)(88.0KiB/1017msec) 00:18:49.555 slat (nsec): min=7994, max=48379, avg=33852.77, stdev=12837.22 00:18:49.555 clat (usec): min=692, max=41995, avg=39584.84, stdev=8698.26 00:18:49.555 lat (usec): min=725, max=42036, avg=39618.69, stdev=8698.62 00:18:49.555 clat percentiles (usec): 00:18:49.555 | 1.00th=[ 693], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:18:49.555 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41681], 00:18:49.555 | 70.00th=[41681], 80.00th=[41681], 90.00th=[41681], 95.00th=[42206], 00:18:49.555 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:18:49.555 | 99.99th=[42206] 00:18:49.555 write: IOPS=503, BW=2014KiB/s (2062kB/s)(2048KiB/1017msec); 0 zone resets 00:18:49.555 slat (usec): min=6, max=27380, avg=62.16, stdev=1209.67 00:18:49.555 clat (usec): min=166, max=598, avg=218.30, stdev=29.69 00:18:49.555 lat (usec): min=173, max=27897, avg=280.46, stdev=1223.21 00:18:49.555 clat percentiles (usec): 00:18:49.555 | 1.00th=[ 180], 5.00th=[ 196], 10.00th=[ 200], 20.00th=[ 202], 00:18:49.555 | 30.00th=[ 204], 40.00th=[ 206], 50.00th=[ 208], 60.00th=[ 212], 00:18:49.555 | 70.00th=[ 239], 80.00th=[ 243], 90.00th=[ 247], 95.00th=[ 249], 00:18:49.555 | 99.00th=[ 269], 99.50th=[ 285], 99.90th=[ 594], 99.95th=[ 594], 00:18:49.555 | 99.99th=[ 594] 00:18:49.555 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:18:49.555 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:49.555 lat (usec) : 250=91.76%, 500=3.75%, 750=0.56% 00:18:49.555 lat (msec) : 50=3.93% 00:18:49.555 cpu : usr=0.20%, sys=0.59%, ctx=537, majf=0, minf=2 00:18:49.555 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:49.555 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:49.555 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:49.555 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:49.555 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:49.555 00:18:49.555 Run status group 0 (all jobs): 00:18:49.555 READ: bw=86.5KiB/s (88.6kB/s), 86.5KiB/s-86.5KiB/s (88.6kB/s-88.6kB/s), io=88.0KiB (90.1kB), run=1017-1017msec 00:18:49.555 WRITE: bw=2014KiB/s (2062kB/s), 2014KiB/s-2014KiB/s 
(2062kB/s-2062kB/s), io=2048KiB (2097kB), run=1017-1017msec 00:18:49.555 00:18:49.555 Disk stats (read/write): 00:18:49.555 nvme0n1: ios=45/512, merge=0/0, ticks=1743/109, in_queue=1852, util=98.90% 00:18:49.555 21:24:04 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:49.555 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:18:49.555 21:24:04 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:49.555 21:24:04 -- common/autotest_common.sh@1205 -- # local i=0 00:18:49.555 21:24:04 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:18:49.555 21:24:04 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:49.815 21:24:04 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:49.815 21:24:04 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:18:49.815 21:24:04 -- common/autotest_common.sh@1217 -- # return 0 00:18:49.815 21:24:04 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:18:49.815 21:24:04 -- target/nmic.sh@53 -- # nvmftestfini 00:18:49.815 21:24:04 -- nvmf/common.sh@477 -- # nvmfcleanup 00:18:49.815 21:24:04 -- nvmf/common.sh@117 -- # sync 00:18:49.815 21:24:04 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:49.815 21:24:04 -- nvmf/common.sh@120 -- # set +e 00:18:49.815 21:24:04 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:49.815 21:24:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:49.815 rmmod nvme_tcp 00:18:49.815 rmmod nvme_fabrics 00:18:49.815 rmmod nvme_keyring 00:18:49.815 21:24:04 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:49.815 21:24:04 -- nvmf/common.sh@124 -- # set -e 00:18:49.815 21:24:04 -- nvmf/common.sh@125 -- # return 0 00:18:49.815 21:24:04 -- nvmf/common.sh@478 -- # '[' -n 1223583 ']' 00:18:49.815 21:24:04 -- nvmf/common.sh@479 -- # killprocess 1223583 00:18:49.815 21:24:04 -- common/autotest_common.sh@936 -- # '[' -z 1223583 ']' 00:18:49.815 21:24:04 -- common/autotest_common.sh@940 -- # kill -0 1223583 00:18:49.815 21:24:04 -- common/autotest_common.sh@941 -- # uname 00:18:49.815 21:24:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:49.815 21:24:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1223583 00:18:49.815 21:24:04 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:49.815 21:24:04 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:49.815 21:24:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1223583' 00:18:49.815 killing process with pid 1223583 00:18:49.815 21:24:04 -- common/autotest_common.sh@955 -- # kill 1223583 00:18:49.815 21:24:04 -- common/autotest_common.sh@960 -- # wait 1223583 00:18:50.386 21:24:05 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:18:50.386 21:24:05 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:18:50.386 21:24:05 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:18:50.386 21:24:05 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:50.386 21:24:05 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:50.386 21:24:05 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:50.386 21:24:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:50.386 21:24:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:52.291 21:24:07 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:52.291 00:18:52.291 real 0m15.932s 00:18:52.291 user 0m51.101s 00:18:52.291 sys 0m4.750s 00:18:52.291 21:24:07 
-- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:52.291 21:24:07 -- common/autotest_common.sh@10 -- # set +x 00:18:52.291 ************************************ 00:18:52.291 END TEST nvmf_nmic 00:18:52.291 ************************************ 00:18:52.550 21:24:07 -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:18:52.550 21:24:07 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:52.550 21:24:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:52.550 21:24:07 -- common/autotest_common.sh@10 -- # set +x 00:18:52.550 ************************************ 00:18:52.550 START TEST nvmf_fio_target 00:18:52.550 ************************************ 00:18:52.550 21:24:07 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:18:52.550 * Looking for test storage... 00:18:52.550 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:18:52.550 21:24:07 -- target/fio.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:18:52.551 21:24:07 -- nvmf/common.sh@7 -- # uname -s 00:18:52.551 21:24:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:52.551 21:24:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:52.551 21:24:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:52.551 21:24:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:52.551 21:24:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:52.551 21:24:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:52.551 21:24:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:52.551 21:24:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:52.551 21:24:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:52.551 21:24:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:52.551 21:24:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b7babf-2e5c-ee11-906e-a4bf01970bf2 00:18:52.551 21:24:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=80b7babf-2e5c-ee11-906e-a4bf01970bf2 00:18:52.551 21:24:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:52.551 21:24:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:52.551 21:24:07 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:18:52.551 21:24:07 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:52.551 21:24:07 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:18:52.551 21:24:07 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:52.551 21:24:07 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:52.551 21:24:07 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:52.551 21:24:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:52.551 21:24:07 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:52.551 21:24:07 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:52.551 21:24:07 -- paths/export.sh@5 -- # export PATH 00:18:52.551 21:24:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:52.551 21:24:07 -- nvmf/common.sh@47 -- # : 0 00:18:52.551 21:24:07 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:52.551 21:24:07 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:52.551 21:24:07 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:52.551 21:24:07 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:52.551 21:24:07 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:52.551 21:24:07 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:52.551 21:24:07 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:52.551 21:24:07 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:52.551 21:24:07 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:52.551 21:24:07 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:52.551 21:24:07 -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:18:52.551 21:24:07 -- target/fio.sh@16 -- # nvmftestinit 00:18:52.551 21:24:07 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:18:52.551 21:24:07 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:52.551 21:24:07 -- nvmf/common.sh@437 -- # prepare_net_devs 00:18:52.551 21:24:07 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:18:52.551 21:24:07 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:18:52.551 21:24:07 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:52.551 21:24:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:52.551 21:24:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:52.551 21:24:07 -- nvmf/common.sh@403 -- # [[ phy-fallback != virt ]] 00:18:52.551 21:24:07 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:18:52.551 21:24:07 -- nvmf/common.sh@285 -- # xtrace_disable 00:18:52.551 21:24:07 -- 
common/autotest_common.sh@10 -- # set +x 00:18:57.836 21:24:12 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:57.836 21:24:12 -- nvmf/common.sh@291 -- # pci_devs=() 00:18:57.836 21:24:12 -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:57.836 21:24:12 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:57.836 21:24:12 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:57.836 21:24:12 -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:57.836 21:24:12 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:57.836 21:24:12 -- nvmf/common.sh@295 -- # net_devs=() 00:18:57.836 21:24:12 -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:57.836 21:24:12 -- nvmf/common.sh@296 -- # e810=() 00:18:57.836 21:24:12 -- nvmf/common.sh@296 -- # local -ga e810 00:18:57.836 21:24:12 -- nvmf/common.sh@297 -- # x722=() 00:18:57.836 21:24:12 -- nvmf/common.sh@297 -- # local -ga x722 00:18:57.836 21:24:12 -- nvmf/common.sh@298 -- # mlx=() 00:18:57.837 21:24:12 -- nvmf/common.sh@298 -- # local -ga mlx 00:18:57.837 21:24:12 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:57.837 21:24:12 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:57.837 21:24:12 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:57.837 21:24:12 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:57.837 21:24:12 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:57.837 21:24:12 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:57.837 21:24:12 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:57.837 21:24:12 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:57.837 21:24:12 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:57.837 21:24:12 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:57.837 21:24:12 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:57.837 21:24:12 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:57.837 21:24:12 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:57.837 21:24:12 -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:18:57.837 21:24:12 -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:18:57.837 21:24:12 -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:18:57.837 21:24:12 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:57.837 21:24:12 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:57.837 21:24:12 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:18:57.837 Found 0000:27:00.0 (0x8086 - 0x159b) 00:18:57.837 21:24:12 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:57.837 21:24:12 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:57.837 21:24:12 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:57.837 21:24:12 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:57.837 21:24:12 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:57.837 21:24:12 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:57.837 21:24:12 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:18:57.837 Found 0000:27:00.1 (0x8086 - 0x159b) 00:18:57.837 21:24:12 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:57.837 21:24:12 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:57.837 21:24:12 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:57.837 21:24:12 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:57.837 
21:24:12 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:57.837 21:24:12 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:57.837 21:24:12 -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:18:57.837 21:24:12 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:57.837 21:24:12 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:57.837 21:24:12 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:57.837 21:24:12 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:57.837 21:24:12 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:18:57.837 Found net devices under 0000:27:00.0: cvl_0_0 00:18:57.837 21:24:12 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:57.837 21:24:12 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:57.837 21:24:12 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:57.837 21:24:12 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:57.837 21:24:12 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:57.837 21:24:12 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:18:57.837 Found net devices under 0000:27:00.1: cvl_0_1 00:18:57.837 21:24:12 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:57.837 21:24:12 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:18:57.837 21:24:12 -- nvmf/common.sh@403 -- # is_hw=yes 00:18:57.837 21:24:12 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:18:57.837 21:24:12 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:18:57.837 21:24:12 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:18:57.837 21:24:12 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:57.837 21:24:12 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:57.837 21:24:12 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:57.837 21:24:12 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:57.837 21:24:12 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:57.837 21:24:12 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:57.837 21:24:12 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:57.837 21:24:12 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:57.837 21:24:12 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:57.837 21:24:12 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:57.837 21:24:12 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:57.837 21:24:12 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:57.837 21:24:12 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:57.837 21:24:12 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:57.837 21:24:12 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:57.837 21:24:12 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:57.837 21:24:12 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:57.837 21:24:12 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:58.096 21:24:12 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:58.096 21:24:12 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:58.096 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:58.096 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.196 ms 00:18:58.096 00:18:58.096 --- 10.0.0.2 ping statistics --- 00:18:58.096 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:58.096 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:18:58.096 21:24:12 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:58.096 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:58.096 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.090 ms 00:18:58.096 00:18:58.096 --- 10.0.0.1 ping statistics --- 00:18:58.096 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:58.096 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:18:58.096 21:24:12 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:58.096 21:24:12 -- nvmf/common.sh@411 -- # return 0 00:18:58.096 21:24:12 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:18:58.096 21:24:12 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:58.096 21:24:12 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:18:58.096 21:24:12 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:18:58.096 21:24:12 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:58.096 21:24:12 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:18:58.096 21:24:12 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:18:58.096 21:24:12 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:18:58.096 21:24:12 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:58.096 21:24:12 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:58.096 21:24:12 -- common/autotest_common.sh@10 -- # set +x 00:18:58.096 21:24:12 -- nvmf/common.sh@470 -- # nvmfpid=1229733 00:18:58.096 21:24:12 -- nvmf/common.sh@471 -- # waitforlisten 1229733 00:18:58.096 21:24:12 -- common/autotest_common.sh@817 -- # '[' -z 1229733 ']' 00:18:58.096 21:24:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:58.096 21:24:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:58.096 21:24:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:58.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:58.096 21:24:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:58.096 21:24:12 -- common/autotest_common.sh@10 -- # set +x 00:18:58.096 21:24:12 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:58.096 [2024-04-24 21:24:12.935038] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization... 00:18:58.096 [2024-04-24 21:24:12.935140] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:58.096 EAL: No free 2048 kB hugepages reported on node 1 00:18:58.096 [2024-04-24 21:24:13.052080] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:58.354 [2024-04-24 21:24:13.151082] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:58.354 [2024-04-24 21:24:13.151118] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:58.354 [2024-04-24 21:24:13.151129] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:58.354 [2024-04-24 21:24:13.151137] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:58.354 [2024-04-24 21:24:13.151145] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:58.354 [2024-04-24 21:24:13.151200] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:58.354 [2024-04-24 21:24:13.151310] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:58.354 [2024-04-24 21:24:13.151380] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:58.354 [2024-04-24 21:24:13.151390] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:58.924 21:24:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:58.924 21:24:13 -- common/autotest_common.sh@850 -- # return 0 00:18:58.924 21:24:13 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:58.924 21:24:13 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:58.924 21:24:13 -- common/autotest_common.sh@10 -- # set +x 00:18:58.924 21:24:13 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:58.924 21:24:13 -- target/fio.sh@19 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:58.924 [2024-04-24 21:24:13.808105] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:58.924 21:24:13 -- target/fio.sh@21 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:59.185 21:24:14 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:18:59.185 21:24:14 -- target/fio.sh@22 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:59.445 21:24:14 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:18:59.445 21:24:14 -- target/fio.sh@24 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:59.445 21:24:14 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:18:59.445 21:24:14 -- target/fio.sh@25 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:59.703 21:24:14 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:18:59.704 21:24:14 -- target/fio.sh@26 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:18:59.962 21:24:14 -- target/fio.sh@29 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:59.962 21:24:14 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:18:59.962 21:24:14 -- target/fio.sh@30 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:00.222 21:24:15 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:19:00.222 21:24:15 -- target/fio.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:00.483 21:24:15 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:19:00.483 21:24:15 -- target/fio.sh@32 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:19:00.483 21:24:15 -- target/fio.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 
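At this point the target has been populated with the bdev mix the fio test exercises: plain mallocs plus a striped raid0 and a concat volume built from further mallocs. As a standalone sketch, again assuming scripts/rpc.py from the checkout above (bdev_malloc_create auto-names its bdevs Malloc0, Malloc1, ...):

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  for _ in $(seq 1 7); do scripts/rpc.py bdev_malloc_create 64 512; done   # Malloc0..Malloc6
  scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'               # stripe
  scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  # the namespaces (Malloc0, Malloc1, raid0, concat0) and the 4420 listener are added next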
00:19:00.743 21:24:15 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:19:00.743 21:24:15 -- target/fio.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:00.743 21:24:15 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:19:00.743 21:24:15 -- target/fio.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:01.004 21:24:15 -- target/fio.sh@38 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:01.004 [2024-04-24 21:24:15.966073] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:01.262 21:24:15 -- target/fio.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:19:01.262 21:24:16 -- target/fio.sh@44 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:19:01.521 21:24:16 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b7babf-2e5c-ee11-906e-a4bf01970bf2 --hostid=80b7babf-2e5c-ee11-906e-a4bf01970bf2 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:02.896 21:24:17 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:19:02.896 21:24:17 -- common/autotest_common.sh@1184 -- # local i=0 00:19:02.896 21:24:17 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:19:02.896 21:24:17 -- common/autotest_common.sh@1186 -- # [[ -n 4 ]] 00:19:02.896 21:24:17 -- common/autotest_common.sh@1187 -- # nvme_device_counter=4 00:19:02.896 21:24:17 -- common/autotest_common.sh@1191 -- # sleep 2 00:19:05.428 21:24:19 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:19:05.428 21:24:19 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:19:05.428 21:24:19 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:19:05.428 21:24:19 -- common/autotest_common.sh@1193 -- # nvme_devices=4 00:19:05.428 21:24:19 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:19:05.428 21:24:19 -- common/autotest_common.sh@1194 -- # return 0 00:19:05.428 21:24:19 -- target/fio.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:19:05.428 [global] 00:19:05.428 thread=1 00:19:05.428 invalidate=1 00:19:05.428 rw=write 00:19:05.428 time_based=1 00:19:05.428 runtime=1 00:19:05.428 ioengine=libaio 00:19:05.428 direct=1 00:19:05.428 bs=4096 00:19:05.428 iodepth=1 00:19:05.428 norandommap=0 00:19:05.428 numjobs=1 00:19:05.428 00:19:05.428 verify_dump=1 00:19:05.428 verify_backlog=512 00:19:05.428 verify_state_save=0 00:19:05.428 do_verify=1 00:19:05.428 verify=crc32c-intel 00:19:05.428 [job0] 00:19:05.428 filename=/dev/nvme0n1 00:19:05.428 [job1] 00:19:05.428 filename=/dev/nvme0n2 00:19:05.428 [job2] 00:19:05.428 filename=/dev/nvme0n3 00:19:05.428 [job3] 00:19:05.428 filename=/dev/nvme0n4 00:19:05.428 Could not set queue depth (nvme0n1) 00:19:05.428 Could not set queue depth (nvme0n2) 00:19:05.428 Could not set queue depth (nvme0n3) 00:19:05.428 Could not set queue depth (nvme0n4) 00:19:05.428 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:05.428 job1: (g=0): rw=write, bs=(R) 4096B-4096B, 
(W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:05.428 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:05.428 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:05.428 fio-3.35 00:19:05.428 Starting 4 threads 00:19:06.807 00:19:06.807 job0: (groupid=0, jobs=1): err= 0: pid=1231421: Wed Apr 24 21:24:21 2024 00:19:06.807 read: IOPS=494, BW=1977KiB/s (2024kB/s)(2044KiB/1034msec) 00:19:06.807 slat (nsec): min=3542, max=46160, avg=20039.78, stdev=9659.26 00:19:06.807 clat (usec): min=234, max=42946, avg=1796.84, stdev=7255.75 00:19:06.807 lat (usec): min=238, max=42954, avg=1816.88, stdev=7258.77 00:19:06.807 clat percentiles (usec): 00:19:06.807 | 1.00th=[ 265], 5.00th=[ 302], 10.00th=[ 338], 20.00th=[ 379], 00:19:06.807 | 30.00th=[ 408], 40.00th=[ 437], 50.00th=[ 478], 60.00th=[ 502], 00:19:06.807 | 70.00th=[ 529], 80.00th=[ 635], 90.00th=[ 758], 95.00th=[ 889], 00:19:06.807 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:19:06.807 | 99.99th=[42730] 00:19:06.807 write: IOPS=495, BW=1981KiB/s (2028kB/s)(2048KiB/1034msec); 0 zone resets 00:19:06.807 slat (nsec): min=5302, max=51786, avg=7212.59, stdev=2193.51 00:19:06.807 clat (usec): min=133, max=470, avg=189.95, stdev=26.00 00:19:06.807 lat (usec): min=140, max=522, avg=197.16, stdev=27.08 00:19:06.807 clat percentiles (usec): 00:19:06.807 | 1.00th=[ 145], 5.00th=[ 157], 10.00th=[ 161], 20.00th=[ 169], 00:19:06.807 | 30.00th=[ 178], 40.00th=[ 184], 50.00th=[ 188], 60.00th=[ 194], 00:19:06.807 | 70.00th=[ 200], 80.00th=[ 208], 90.00th=[ 219], 95.00th=[ 227], 00:19:06.807 | 99.00th=[ 251], 99.50th=[ 281], 99.90th=[ 469], 99.95th=[ 469], 00:19:06.807 | 99.99th=[ 469] 00:19:06.807 bw ( KiB/s): min= 4096, max= 4096, per=41.36%, avg=4096.00, stdev= 0.00, samples=1 00:19:06.807 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:06.807 lat (usec) : 250=49.46%, 500=30.21%, 750=15.15%, 1000=3.52% 00:19:06.807 lat (msec) : 2=0.10%, 50=1.56% 00:19:06.807 cpu : usr=0.58%, sys=2.23%, ctx=1023, majf=0, minf=1 00:19:06.807 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:06.807 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:06.807 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:06.807 issued rwts: total=511,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:06.807 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:06.807 job1: (groupid=0, jobs=1): err= 0: pid=1231422: Wed Apr 24 21:24:21 2024 00:19:06.807 read: IOPS=22, BW=90.4KiB/s (92.5kB/s)(92.0KiB/1018msec) 00:19:06.807 slat (nsec): min=7672, max=44785, avg=35863.30, stdev=11003.80 00:19:06.807 clat (usec): min=618, max=42121, avg=39900.44, stdev=8574.96 00:19:06.807 lat (usec): min=661, max=42164, avg=39936.30, stdev=8573.79 00:19:06.807 clat percentiles (usec): 00:19:06.807 | 1.00th=[ 619], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:19:06.807 | 30.00th=[41157], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681], 00:19:06.807 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:19:06.807 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:19:06.807 | 99.99th=[42206] 00:19:06.807 write: IOPS=502, BW=2012KiB/s (2060kB/s)(2048KiB/1018msec); 0 zone resets 00:19:06.807 slat (nsec): min=5197, max=38782, avg=7197.94, stdev=1909.64 00:19:06.807 clat (usec): min=129, max=696, 
avg=185.24, stdev=39.99 00:19:06.807 lat (usec): min=135, max=703, avg=192.44, stdev=40.73 00:19:06.807 clat percentiles (usec): 00:19:06.807 | 1.00th=[ 141], 5.00th=[ 151], 10.00th=[ 157], 20.00th=[ 165], 00:19:06.807 | 30.00th=[ 169], 40.00th=[ 176], 50.00th=[ 182], 60.00th=[ 188], 00:19:06.807 | 70.00th=[ 194], 80.00th=[ 202], 90.00th=[ 212], 95.00th=[ 225], 00:19:06.807 | 99.00th=[ 247], 99.50th=[ 545], 99.90th=[ 701], 99.95th=[ 701], 00:19:06.807 | 99.99th=[ 701] 00:19:06.807 bw ( KiB/s): min= 4096, max= 4096, per=41.36%, avg=4096.00, stdev= 0.00, samples=1 00:19:06.807 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:06.807 lat (usec) : 250=95.14%, 750=0.75% 00:19:06.807 lat (msec) : 50=4.11% 00:19:06.807 cpu : usr=0.10%, sys=0.49%, ctx=535, majf=0, minf=1 00:19:06.807 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:06.807 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:06.807 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:06.807 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:06.807 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:06.807 job2: (groupid=0, jobs=1): err= 0: pid=1231423: Wed Apr 24 21:24:21 2024 00:19:06.807 read: IOPS=21, BW=85.6KiB/s (87.7kB/s)(88.0KiB/1028msec) 00:19:06.807 slat (nsec): min=6675, max=43390, avg=38263.82, stdev=8206.09 00:19:06.807 clat (usec): min=40952, max=42157, avg=41852.78, stdev=303.52 00:19:06.807 lat (usec): min=40988, max=42200, avg=41891.04, stdev=303.97 00:19:06.807 clat percentiles (usec): 00:19:06.807 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41681], 20.00th=[41681], 00:19:06.807 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:19:06.807 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:19:06.807 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:19:06.807 | 99.99th=[42206] 00:19:06.807 write: IOPS=498, BW=1992KiB/s (2040kB/s)(2048KiB/1028msec); 0 zone resets 00:19:06.807 slat (nsec): min=5598, max=47275, avg=7491.16, stdev=2062.51 00:19:06.807 clat (usec): min=135, max=559, avg=197.28, stdev=38.84 00:19:06.807 lat (usec): min=143, max=607, avg=204.77, stdev=39.92 00:19:06.807 clat percentiles (usec): 00:19:06.807 | 1.00th=[ 143], 5.00th=[ 151], 10.00th=[ 159], 20.00th=[ 167], 00:19:06.807 | 30.00th=[ 176], 40.00th=[ 182], 50.00th=[ 188], 60.00th=[ 196], 00:19:06.807 | 70.00th=[ 208], 80.00th=[ 227], 90.00th=[ 245], 95.00th=[ 265], 00:19:06.807 | 99.00th=[ 314], 99.50th=[ 318], 99.90th=[ 562], 99.95th=[ 562], 00:19:06.807 | 99.99th=[ 562] 00:19:06.807 bw ( KiB/s): min= 4096, max= 4096, per=41.36%, avg=4096.00, stdev= 0.00, samples=1 00:19:06.807 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:06.807 lat (usec) : 250=88.95%, 500=6.74%, 750=0.19% 00:19:06.807 lat (msec) : 50=4.12% 00:19:06.807 cpu : usr=0.19%, sys=0.39%, ctx=537, majf=0, minf=2 00:19:06.807 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:06.807 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:06.807 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:06.807 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:06.807 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:06.807 job3: (groupid=0, jobs=1): err= 0: pid=1231424: Wed Apr 24 21:24:21 2024 00:19:06.807 read: IOPS=524, BW=2100KiB/s (2150kB/s)(2104KiB/1002msec) 
00:19:06.807 slat (nsec): min=3456, max=42680, avg=15467.64, stdev=9535.62 00:19:06.807 clat (usec): min=213, max=42364, avg=1435.22, stdev=6458.71 00:19:06.807 lat (usec): min=218, max=42405, avg=1450.69, stdev=6462.04 00:19:06.807 clat percentiles (usec): 00:19:06.807 | 1.00th=[ 223], 5.00th=[ 237], 10.00th=[ 258], 20.00th=[ 297], 00:19:06.807 | 30.00th=[ 334], 40.00th=[ 367], 50.00th=[ 404], 60.00th=[ 449], 00:19:06.807 | 70.00th=[ 482], 80.00th=[ 523], 90.00th=[ 586], 95.00th=[ 635], 00:19:06.807 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:19:06.807 | 99.99th=[42206] 00:19:06.807 write: IOPS=1021, BW=4088KiB/s (4186kB/s)(4096KiB/1002msec); 0 zone resets 00:19:06.807 slat (nsec): min=5463, max=66031, avg=13339.23, stdev=11341.04 00:19:06.807 clat (usec): min=136, max=715, avg=214.37, stdev=65.44 00:19:06.807 lat (usec): min=141, max=760, avg=227.71, stdev=72.73 00:19:06.807 clat percentiles (usec): 00:19:06.807 | 1.00th=[ 147], 5.00th=[ 155], 10.00th=[ 161], 20.00th=[ 169], 00:19:06.807 | 30.00th=[ 174], 40.00th=[ 180], 50.00th=[ 188], 60.00th=[ 198], 00:19:06.807 | 70.00th=[ 235], 80.00th=[ 262], 90.00th=[ 297], 95.00th=[ 330], 00:19:06.807 | 99.00th=[ 449], 99.50th=[ 519], 99.90th=[ 586], 99.95th=[ 717], 00:19:06.807 | 99.99th=[ 717] 00:19:06.807 bw ( KiB/s): min= 8192, max= 8192, per=82.72%, avg=8192.00, stdev= 0.00, samples=1 00:19:06.807 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:19:06.807 lat (usec) : 250=54.39%, 500=36.90%, 750=7.87% 00:19:06.807 lat (msec) : 50=0.84% 00:19:06.807 cpu : usr=0.60%, sys=3.00%, ctx=1552, majf=0, minf=1 00:19:06.807 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:06.807 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:06.807 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:06.807 issued rwts: total=526,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:06.807 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:06.807 00:19:06.807 Run status group 0 (all jobs): 00:19:06.807 READ: bw=4186KiB/s (4286kB/s), 85.6KiB/s-2100KiB/s (87.7kB/s-2150kB/s), io=4328KiB (4432kB), run=1002-1034msec 00:19:06.807 WRITE: bw=9903KiB/s (10.1MB/s), 1981KiB/s-4088KiB/s (2028kB/s-4186kB/s), io=10.0MiB (10.5MB), run=1002-1034msec 00:19:06.807 00:19:06.807 Disk stats (read/write): 00:19:06.807 nvme0n1: ios=548/512, merge=0/0, ticks=709/94, in_queue=803, util=85.87% 00:19:06.807 nvme0n2: ios=68/512, merge=0/0, ticks=765/94, in_queue=859, util=89.68% 00:19:06.807 nvme0n3: ios=80/512, merge=0/0, ticks=1306/100, in_queue=1406, util=94.30% 00:19:06.807 nvme0n4: ios=542/1024, merge=0/0, ticks=1436/203, in_queue=1639, util=94.24% 00:19:06.807 21:24:21 -- target/fio.sh@51 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:19:06.807 [global] 00:19:06.807 thread=1 00:19:06.807 invalidate=1 00:19:06.807 rw=randwrite 00:19:06.807 time_based=1 00:19:06.807 runtime=1 00:19:06.807 ioengine=libaio 00:19:06.807 direct=1 00:19:06.807 bs=4096 00:19:06.807 iodepth=1 00:19:06.807 norandommap=0 00:19:06.807 numjobs=1 00:19:06.807 00:19:06.807 verify_dump=1 00:19:06.807 verify_backlog=512 00:19:06.807 verify_state_save=0 00:19:06.807 do_verify=1 00:19:06.807 verify=crc32c-intel 00:19:06.807 [job0] 00:19:06.807 filename=/dev/nvme0n1 00:19:06.807 [job1] 00:19:06.807 filename=/dev/nvme0n2 00:19:06.807 [job2] 00:19:06.807 filename=/dev/nvme0n3 00:19:06.807 [job3] 00:19:06.807 
filename=/dev/nvme0n4 00:19:06.807 Could not set queue depth (nvme0n1) 00:19:06.807 Could not set queue depth (nvme0n2) 00:19:06.807 Could not set queue depth (nvme0n3) 00:19:06.807 Could not set queue depth (nvme0n4) 00:19:07.066 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:07.066 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:07.066 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:07.066 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:07.066 fio-3.35 00:19:07.066 Starting 4 threads 00:19:08.443 00:19:08.443 job0: (groupid=0, jobs=1): err= 0: pid=1231901: Wed Apr 24 21:24:23 2024 00:19:08.443 read: IOPS=134, BW=539KiB/s (552kB/s)(556KiB/1032msec) 00:19:08.443 slat (nsec): min=3436, max=41976, avg=9189.46, stdev=10284.52 00:19:08.443 clat (usec): min=192, max=42056, avg=6804.84, stdev=15191.97 00:19:08.443 lat (usec): min=197, max=42071, avg=6814.03, stdev=15201.39 00:19:08.443 clat percentiles (usec): 00:19:08.443 | 1.00th=[ 194], 5.00th=[ 206], 10.00th=[ 212], 20.00th=[ 221], 00:19:08.443 | 30.00th=[ 227], 40.00th=[ 233], 50.00th=[ 241], 60.00th=[ 247], 00:19:08.443 | 70.00th=[ 265], 80.00th=[ 289], 90.00th=[41681], 95.00th=[42206], 00:19:08.443 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:19:08.443 | 99.99th=[42206] 00:19:08.443 write: IOPS=496, BW=1984KiB/s (2032kB/s)(2048KiB/1032msec); 0 zone resets 00:19:08.443 slat (nsec): min=4731, max=48195, avg=6232.27, stdev=2275.94 00:19:08.443 clat (usec): min=114, max=485, avg=155.95, stdev=25.98 00:19:08.443 lat (usec): min=120, max=534, avg=162.18, stdev=27.21 00:19:08.443 clat percentiles (usec): 00:19:08.443 | 1.00th=[ 121], 5.00th=[ 128], 10.00th=[ 131], 20.00th=[ 139], 00:19:08.443 | 30.00th=[ 143], 40.00th=[ 147], 50.00th=[ 153], 60.00th=[ 159], 00:19:08.443 | 70.00th=[ 165], 80.00th=[ 172], 90.00th=[ 182], 95.00th=[ 192], 00:19:08.443 | 99.00th=[ 231], 99.50th=[ 258], 99.90th=[ 486], 99.95th=[ 486], 00:19:08.443 | 99.99th=[ 486] 00:19:08.443 bw ( KiB/s): min= 4096, max= 4096, per=41.28%, avg=4096.00, stdev= 0.00, samples=1 00:19:08.443 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:08.443 lat (usec) : 250=91.55%, 500=4.92%, 750=0.15% 00:19:08.443 lat (msec) : 50=3.38% 00:19:08.443 cpu : usr=0.39%, sys=0.10%, ctx=655, majf=0, minf=1 00:19:08.443 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:08.443 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:08.443 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:08.443 issued rwts: total=139,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:08.443 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:08.443 job1: (groupid=0, jobs=1): err= 0: pid=1231902: Wed Apr 24 21:24:23 2024 00:19:08.443 read: IOPS=520, BW=2083KiB/s (2133kB/s)(2108KiB/1012msec) 00:19:08.443 slat (nsec): min=2808, max=43061, avg=6060.63, stdev=5208.28 00:19:08.443 clat (usec): min=195, max=42490, avg=1528.36, stdev=7136.00 00:19:08.443 lat (usec): min=201, max=42495, avg=1534.42, stdev=7140.33 00:19:08.443 clat percentiles (usec): 00:19:08.443 | 1.00th=[ 202], 5.00th=[ 212], 10.00th=[ 217], 20.00th=[ 225], 00:19:08.443 | 30.00th=[ 229], 40.00th=[ 235], 50.00th=[ 241], 60.00th=[ 247], 00:19:08.443 | 70.00th=[ 255], 
80.00th=[ 273], 90.00th=[ 314], 95.00th=[ 383], 00:19:08.443 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:19:08.443 | 99.99th=[42730] 00:19:08.443 write: IOPS=1011, BW=4047KiB/s (4145kB/s)(4096KiB/1012msec); 0 zone resets 00:19:08.443 slat (nsec): min=3500, max=48440, avg=6073.87, stdev=2127.54 00:19:08.443 clat (usec): min=119, max=616, avg=189.45, stdev=37.18 00:19:08.443 lat (usec): min=126, max=623, avg=195.52, stdev=38.07 00:19:08.443 clat percentiles (usec): 00:19:08.444 | 1.00th=[ 129], 5.00th=[ 135], 10.00th=[ 139], 20.00th=[ 157], 00:19:08.444 | 30.00th=[ 172], 40.00th=[ 182], 50.00th=[ 192], 60.00th=[ 202], 00:19:08.444 | 70.00th=[ 210], 80.00th=[ 221], 90.00th=[ 229], 95.00th=[ 237], 00:19:08.444 | 99.00th=[ 262], 99.50th=[ 277], 99.90th=[ 562], 99.95th=[ 619], 00:19:08.444 | 99.99th=[ 619] 00:19:08.444 bw ( KiB/s): min= 8192, max= 8192, per=82.56%, avg=8192.00, stdev= 0.00, samples=1 00:19:08.444 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:19:08.444 lat (usec) : 250=86.59%, 500=11.99%, 750=0.32% 00:19:08.444 lat (msec) : 10=0.06%, 50=1.03% 00:19:08.444 cpu : usr=0.49%, sys=0.79%, ctx=1554, majf=0, minf=2 00:19:08.444 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:08.444 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:08.444 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:08.444 issued rwts: total=527,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:08.444 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:08.444 job2: (groupid=0, jobs=1): err= 0: pid=1231903: Wed Apr 24 21:24:23 2024 00:19:08.444 read: IOPS=406, BW=1625KiB/s (1663kB/s)(1644KiB/1012msec) 00:19:08.444 slat (nsec): min=4327, max=44002, avg=7447.69, stdev=5690.80 00:19:08.444 clat (usec): min=195, max=42176, avg=2174.26, stdev=8748.18 00:19:08.444 lat (usec): min=201, max=42208, avg=2181.71, stdev=8752.57 00:19:08.444 clat percentiles (usec): 00:19:08.444 | 1.00th=[ 202], 5.00th=[ 212], 10.00th=[ 217], 20.00th=[ 225], 00:19:08.444 | 30.00th=[ 231], 40.00th=[ 237], 50.00th=[ 245], 60.00th=[ 255], 00:19:08.444 | 70.00th=[ 265], 80.00th=[ 281], 90.00th=[ 306], 95.00th=[ 424], 00:19:08.444 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:19:08.444 | 99.99th=[42206] 00:19:08.444 write: IOPS=505, BW=2024KiB/s (2072kB/s)(2048KiB/1012msec); 0 zone resets 00:19:08.444 slat (nsec): min=4766, max=46476, avg=6610.06, stdev=1991.53 00:19:08.444 clat (usec): min=142, max=365, avg=213.34, stdev=29.89 00:19:08.444 lat (usec): min=148, max=411, avg=219.95, stdev=30.44 00:19:08.444 clat percentiles (usec): 00:19:08.444 | 1.00th=[ 157], 5.00th=[ 167], 10.00th=[ 176], 20.00th=[ 188], 00:19:08.444 | 30.00th=[ 198], 40.00th=[ 206], 50.00th=[ 212], 60.00th=[ 219], 00:19:08.444 | 70.00th=[ 227], 80.00th=[ 239], 90.00th=[ 253], 95.00th=[ 269], 00:19:08.444 | 99.00th=[ 281], 99.50th=[ 289], 99.90th=[ 367], 99.95th=[ 367], 00:19:08.444 | 99.99th=[ 367] 00:19:08.444 bw ( KiB/s): min= 4096, max= 4096, per=41.28%, avg=4096.00, stdev= 0.00, samples=1 00:19:08.444 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:08.444 lat (usec) : 250=74.43%, 500=23.40%, 750=0.11% 00:19:08.444 lat (msec) : 50=2.06% 00:19:08.444 cpu : usr=0.30%, sys=0.69%, ctx=924, majf=0, minf=1 00:19:08.444 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:08.444 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:08.444 
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:08.444 issued rwts: total=411,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:08.444 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:08.444 job3: (groupid=0, jobs=1): err= 0: pid=1231904: Wed Apr 24 21:24:23 2024 00:19:08.444 read: IOPS=458, BW=1833KiB/s (1877kB/s)(1892KiB/1032msec) 00:19:08.444 slat (nsec): min=2934, max=41085, avg=5526.10, stdev=5425.74 00:19:08.444 clat (usec): min=206, max=42039, avg=1934.40, stdev=8157.06 00:19:08.444 lat (usec): min=210, max=42070, avg=1939.92, stdev=8161.28 00:19:08.444 clat percentiles (usec): 00:19:08.444 | 1.00th=[ 212], 5.00th=[ 225], 10.00th=[ 233], 20.00th=[ 239], 00:19:08.444 | 30.00th=[ 247], 40.00th=[ 253], 50.00th=[ 262], 60.00th=[ 269], 00:19:08.444 | 70.00th=[ 281], 80.00th=[ 297], 90.00th=[ 343], 95.00th=[ 375], 00:19:08.444 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:19:08.444 | 99.99th=[42206] 00:19:08.444 write: IOPS=496, BW=1984KiB/s (2032kB/s)(2048KiB/1032msec); 0 zone resets 00:19:08.444 slat (nsec): min=4643, max=48179, avg=6573.11, stdev=2303.46 00:19:08.444 clat (usec): min=145, max=483, avg=212.47, stdev=31.63 00:19:08.444 lat (usec): min=151, max=531, avg=219.04, stdev=32.46 00:19:08.444 clat percentiles (usec): 00:19:08.444 | 1.00th=[ 155], 5.00th=[ 167], 10.00th=[ 174], 20.00th=[ 186], 00:19:08.444 | 30.00th=[ 196], 40.00th=[ 204], 50.00th=[ 210], 60.00th=[ 219], 00:19:08.444 | 70.00th=[ 229], 80.00th=[ 237], 90.00th=[ 251], 95.00th=[ 265], 00:19:08.444 | 99.00th=[ 285], 99.50th=[ 297], 99.90th=[ 482], 99.95th=[ 482], 00:19:08.444 | 99.99th=[ 482] 00:19:08.444 bw ( KiB/s): min= 4096, max= 4096, per=41.28%, avg=4096.00, stdev= 0.00, samples=1 00:19:08.444 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:08.444 lat (usec) : 250=64.37%, 500=33.60%, 750=0.10% 00:19:08.444 lat (msec) : 50=1.93% 00:19:08.444 cpu : usr=0.10%, sys=0.68%, ctx=985, majf=0, minf=1 00:19:08.444 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:08.444 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:08.444 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:08.444 issued rwts: total=473,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:08.444 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:08.444 00:19:08.444 Run status group 0 (all jobs): 00:19:08.444 READ: bw=6008KiB/s (6152kB/s), 539KiB/s-2083KiB/s (552kB/s-2133kB/s), io=6200KiB (6349kB), run=1012-1032msec 00:19:08.444 WRITE: bw=9922KiB/s (10.2MB/s), 1984KiB/s-4047KiB/s (2032kB/s-4145kB/s), io=10.0MiB (10.5MB), run=1012-1032msec 00:19:08.444 00:19:08.444 Disk stats (read/write): 00:19:08.444 nvme0n1: ios=166/512, merge=0/0, ticks=1639/78, in_queue=1717, util=99.50% 00:19:08.444 nvme0n2: ios=551/1024, merge=0/0, ticks=1357/195, in_queue=1552, util=98.58% 00:19:08.444 nvme0n3: ios=449/512, merge=0/0, ticks=1671/112, in_queue=1783, util=99.48% 00:19:08.444 nvme0n4: ios=468/512, merge=0/0, ticks=704/105, in_queue=809, util=89.80% 00:19:08.444 21:24:23 -- target/fio.sh@52 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:19:08.444 [global] 00:19:08.444 thread=1 00:19:08.444 invalidate=1 00:19:08.444 rw=write 00:19:08.444 time_based=1 00:19:08.444 runtime=1 00:19:08.444 ioengine=libaio 00:19:08.444 direct=1 00:19:08.444 bs=4096 00:19:08.444 iodepth=128 00:19:08.444 norandommap=0 00:19:08.444 numjobs=1 
00:19:08.444 00:19:08.444 verify_dump=1 00:19:08.444 verify_backlog=512 00:19:08.444 verify_state_save=0 00:19:08.444 do_verify=1 00:19:08.444 verify=crc32c-intel 00:19:08.444 [job0] 00:19:08.444 filename=/dev/nvme0n1 00:19:08.444 [job1] 00:19:08.444 filename=/dev/nvme0n2 00:19:08.444 [job2] 00:19:08.444 filename=/dev/nvme0n3 00:19:08.444 [job3] 00:19:08.444 filename=/dev/nvme0n4 00:19:08.444 Could not set queue depth (nvme0n1) 00:19:08.444 Could not set queue depth (nvme0n2) 00:19:08.444 Could not set queue depth (nvme0n3) 00:19:08.444 Could not set queue depth (nvme0n4) 00:19:08.703 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:08.703 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:08.703 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:08.703 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:08.703 fio-3.35 00:19:08.703 Starting 4 threads 00:19:10.076 00:19:10.076 job0: (groupid=0, jobs=1): err= 0: pid=1232377: Wed Apr 24 21:24:24 2024 00:19:10.076 read: IOPS=3524, BW=13.8MiB/s (14.4MB/s)(14.0MiB/1017msec) 00:19:10.076 slat (nsec): min=950, max=12300k, avg=96187.14, stdev=676369.26 00:19:10.076 clat (usec): min=3949, max=25026, avg=11124.87, stdev=3761.90 00:19:10.076 lat (usec): min=3952, max=25033, avg=11221.06, stdev=3811.26 00:19:10.076 clat percentiles (usec): 00:19:10.076 | 1.00th=[ 4047], 5.00th=[ 7832], 10.00th=[ 8029], 20.00th=[ 8586], 00:19:10.076 | 30.00th=[ 8717], 40.00th=[ 8848], 50.00th=[ 9110], 60.00th=[10945], 00:19:10.076 | 70.00th=[12911], 80.00th=[14484], 90.00th=[16188], 95.00th=[18744], 00:19:10.076 | 99.00th=[22414], 99.50th=[22676], 99.90th=[24249], 99.95th=[24249], 00:19:10.076 | 99.99th=[25035] 00:19:10.076 write: IOPS=3852, BW=15.0MiB/s (15.8MB/s)(15.3MiB/1017msec); 0 zone resets 00:19:10.076 slat (nsec): min=1638, max=19641k, avg=164920.32, stdev=907102.29 00:19:10.076 clat (usec): min=1990, max=76674, avg=22687.66, stdev=16112.71 00:19:10.076 lat (usec): min=1995, max=76684, avg=22852.58, stdev=16202.05 00:19:10.076 clat percentiles (usec): 00:19:10.076 | 1.00th=[ 3032], 5.00th=[ 5014], 10.00th=[ 7308], 20.00th=[10945], 00:19:10.076 | 30.00th=[14877], 40.00th=[15664], 50.00th=[15926], 60.00th=[18744], 00:19:10.076 | 70.00th=[25035], 80.00th=[35390], 90.00th=[44303], 95.00th=[58459], 00:19:10.076 | 99.00th=[72877], 99.50th=[76022], 99.90th=[77071], 99.95th=[77071], 00:19:10.076 | 99.99th=[77071] 00:19:10.076 bw ( KiB/s): min=13936, max=16384, per=24.53%, avg=15160.00, stdev=1731.00, samples=2 00:19:10.076 iops : min= 3484, max= 4096, avg=3790.00, stdev=432.75, samples=2 00:19:10.076 lat (msec) : 2=0.07%, 4=0.96%, 10=36.23%, 20=40.74%, 50=18.26% 00:19:10.076 lat (msec) : 100=3.75% 00:19:10.076 cpu : usr=1.28%, sys=2.46%, ctx=460, majf=0, minf=1 00:19:10.076 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:19:10.076 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:10.076 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:10.076 issued rwts: total=3584,3918,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:10.076 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:10.076 job1: (groupid=0, jobs=1): err= 0: pid=1232378: Wed Apr 24 21:24:24 2024 00:19:10.076 read: IOPS=4377, BW=17.1MiB/s (17.9MB/s)(17.9MiB/1049msec) 00:19:10.076 
slat (nsec): min=879, max=13049k, avg=113586.69, stdev=766368.20 00:19:10.076 clat (usec): min=3003, max=57133, avg=13161.74, stdev=10283.97 00:19:10.076 lat (usec): min=3006, max=57910, avg=13275.33, stdev=10342.77 00:19:10.076 clat percentiles (usec): 00:19:10.076 | 1.00th=[ 5014], 5.00th=[ 7242], 10.00th=[ 7963], 20.00th=[ 8455], 00:19:10.076 | 30.00th=[ 8717], 40.00th=[ 8848], 50.00th=[ 9110], 60.00th=[10159], 00:19:10.076 | 70.00th=[12387], 80.00th=[14615], 90.00th=[20317], 95.00th=[45351], 00:19:10.076 | 99.00th=[54264], 99.50th=[55837], 99.90th=[56886], 99.95th=[56886], 00:19:10.076 | 99.99th=[56886] 00:19:10.076 write: IOPS=4392, BW=17.2MiB/s (18.0MB/s)(18.0MiB/1049msec); 0 zone resets 00:19:10.076 slat (nsec): min=1588, max=15152k, avg=102525.88, stdev=569169.03 00:19:10.076 clat (usec): min=1063, max=57100, avg=15746.09, stdev=10119.80 00:19:10.076 lat (usec): min=1071, max=57104, avg=15848.62, stdev=10164.10 00:19:10.076 clat percentiles (usec): 00:19:10.076 | 1.00th=[ 2999], 5.00th=[ 5669], 10.00th=[ 7242], 20.00th=[ 7767], 00:19:10.076 | 30.00th=[ 8455], 40.00th=[11731], 50.00th=[14615], 60.00th=[15533], 00:19:10.076 | 70.00th=[15926], 80.00th=[19006], 90.00th=[34341], 95.00th=[41681], 00:19:10.076 | 99.00th=[43254], 99.50th=[43254], 99.90th=[44827], 99.95th=[50070], 00:19:10.076 | 99.99th=[56886] 00:19:10.076 bw ( KiB/s): min=17232, max=19632, per=29.83%, avg=18432.00, stdev=1697.06, samples=2 00:19:10.076 iops : min= 4308, max= 4908, avg=4608.00, stdev=424.26, samples=2 00:19:10.076 lat (msec) : 2=0.18%, 4=1.22%, 10=45.45%, 20=38.35%, 50=13.78% 00:19:10.076 lat (msec) : 100=1.02% 00:19:10.076 cpu : usr=0.95%, sys=2.48%, ctx=523, majf=0, minf=1 00:19:10.077 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:19:10.077 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:10.077 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:10.077 issued rwts: total=4592,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:10.077 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:10.077 job2: (groupid=0, jobs=1): err= 0: pid=1232379: Wed Apr 24 21:24:24 2024 00:19:10.077 read: IOPS=3585, BW=14.0MiB/s (14.7MB/s)(14.2MiB/1017msec) 00:19:10.077 slat (nsec): min=1004, max=14462k, avg=126504.14, stdev=925317.35 00:19:10.077 clat (usec): min=3656, max=50267, avg=14821.77, stdev=6750.96 00:19:10.077 lat (usec): min=3659, max=50276, avg=14948.28, stdev=6821.42 00:19:10.077 clat percentiles (usec): 00:19:10.077 | 1.00th=[ 7963], 5.00th=[ 9503], 10.00th=[ 9634], 20.00th=[ 9896], 00:19:10.077 | 30.00th=[10290], 40.00th=[10814], 50.00th=[12649], 60.00th=[15401], 00:19:10.077 | 70.00th=[17171], 80.00th=[17695], 90.00th=[22676], 95.00th=[24249], 00:19:10.077 | 99.00th=[45876], 99.50th=[47973], 99.90th=[50070], 99.95th=[50070], 00:19:10.077 | 99.99th=[50070] 00:19:10.077 write: IOPS=4027, BW=15.7MiB/s (16.5MB/s)(16.0MiB/1017msec); 0 zone resets 00:19:10.077 slat (nsec): min=1596, max=12013k, avg=128203.55, stdev=695910.33 00:19:10.077 clat (usec): min=2002, max=84638, avg=18276.65, stdev=12393.78 00:19:10.077 lat (usec): min=2009, max=84647, avg=18404.85, stdev=12463.35 00:19:10.077 clat percentiles (usec): 00:19:10.077 | 1.00th=[ 3589], 5.00th=[ 5866], 10.00th=[ 7898], 20.00th=[ 9503], 00:19:10.077 | 30.00th=[12911], 40.00th=[15795], 50.00th=[17171], 60.00th=[17695], 00:19:10.077 | 70.00th=[18220], 80.00th=[19006], 90.00th=[30540], 95.00th=[41157], 00:19:10.077 | 99.00th=[78119], 99.50th=[80217], 99.90th=[84411], 
99.95th=[84411], 00:19:10.077 | 99.99th=[84411] 00:19:10.077 bw ( KiB/s): min=15512, max=16736, per=26.09%, avg=16124.00, stdev=865.50, samples=2 00:19:10.077 iops : min= 3878, max= 4184, avg=4031.00, stdev=216.37, samples=2 00:19:10.077 lat (msec) : 4=0.65%, 10=22.63%, 20=60.91%, 50=13.89%, 100=1.92% 00:19:10.077 cpu : usr=2.56%, sys=2.07%, ctx=431, majf=0, minf=1 00:19:10.077 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:19:10.077 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:10.077 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:10.077 issued rwts: total=3646,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:10.077 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:10.077 job3: (groupid=0, jobs=1): err= 0: pid=1232380: Wed Apr 24 21:24:24 2024 00:19:10.077 read: IOPS=3147, BW=12.3MiB/s (12.9MB/s)(12.4MiB/1011msec) 00:19:10.077 slat (nsec): min=966, max=24791k, avg=166904.31, stdev=1225058.15 00:19:10.077 clat (usec): min=3365, max=92385, avg=18813.83, stdev=13431.55 00:19:10.077 lat (usec): min=4830, max=92393, avg=18980.73, stdev=13535.80 00:19:10.077 clat percentiles (usec): 00:19:10.077 | 1.00th=[ 6325], 5.00th=[ 9503], 10.00th=[ 9634], 20.00th=[10028], 00:19:10.077 | 30.00th=[10290], 40.00th=[11469], 50.00th=[14222], 60.00th=[17433], 00:19:10.077 | 70.00th=[19268], 80.00th=[24773], 90.00th=[33162], 95.00th=[47973], 00:19:10.077 | 99.00th=[81265], 99.50th=[83362], 99.90th=[92799], 99.95th=[92799], 00:19:10.077 | 99.99th=[92799] 00:19:10.077 write: IOPS=3545, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1011msec); 0 zone resets 00:19:10.077 slat (nsec): min=1691, max=18175k, avg=126520.41, stdev=721963.97 00:19:10.077 clat (usec): min=2529, max=92363, avg=19042.00, stdev=11315.43 00:19:10.077 lat (usec): min=2535, max=92369, avg=19168.52, stdev=11369.87 00:19:10.077 clat percentiles (usec): 00:19:10.077 | 1.00th=[ 3916], 5.00th=[ 6194], 10.00th=[ 8029], 20.00th=[12518], 00:19:10.077 | 30.00th=[16581], 40.00th=[17171], 50.00th=[17695], 60.00th=[17957], 00:19:10.077 | 70.00th=[18482], 80.00th=[21365], 90.00th=[29754], 95.00th=[34341], 00:19:10.077 | 99.00th=[79168], 99.50th=[85459], 99.90th=[85459], 99.95th=[92799], 00:19:10.077 | 99.99th=[92799] 00:19:10.077 bw ( KiB/s): min=12552, max=15984, per=23.09%, avg=14268.00, stdev=2426.79, samples=2 00:19:10.077 iops : min= 3138, max= 3996, avg=3567.00, stdev=606.70, samples=2 00:19:10.077 lat (msec) : 4=0.61%, 10=17.32%, 20=57.85%, 50=20.93%, 100=3.30% 00:19:10.077 cpu : usr=1.49%, sys=2.97%, ctx=411, majf=0, minf=1 00:19:10.077 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:19:10.077 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:10.077 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:10.077 issued rwts: total=3182,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:10.077 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:10.077 00:19:10.077 Run status group 0 (all jobs): 00:19:10.077 READ: bw=55.9MiB/s (58.6MB/s), 12.3MiB/s-17.1MiB/s (12.9MB/s-17.9MB/s), io=58.6MiB (61.5MB), run=1011-1049msec 00:19:10.077 WRITE: bw=60.3MiB/s (63.3MB/s), 13.8MiB/s-17.2MiB/s (14.5MB/s-18.0MB/s), io=63.3MiB (66.4MB), run=1011-1049msec 00:19:10.077 00:19:10.077 Disk stats (read/write): 00:19:10.077 nvme0n1: ios=3124/3239, merge=0/0, ticks=32640/64469, in_queue=97109, util=99.10% 00:19:10.077 nvme0n2: ios=3735/4096, merge=0/0, ticks=46550/60989, in_queue=107539, util=88.04% 
00:19:10.077 nvme0n3: ios=3131/3535, merge=0/0, ticks=43320/63649, in_queue=106969, util=99.28% 00:19:10.077 nvme0n4: ios=2579/3063, merge=0/0, ticks=50129/56899, in_queue=107028, util=99.17% 00:19:10.077 21:24:24 -- target/fio.sh@53 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:19:10.077 [global] 00:19:10.077 thread=1 00:19:10.077 invalidate=1 00:19:10.077 rw=randwrite 00:19:10.077 time_based=1 00:19:10.077 runtime=1 00:19:10.077 ioengine=libaio 00:19:10.077 direct=1 00:19:10.077 bs=4096 00:19:10.077 iodepth=128 00:19:10.077 norandommap=0 00:19:10.077 numjobs=1 00:19:10.077 00:19:10.077 verify_dump=1 00:19:10.077 verify_backlog=512 00:19:10.077 verify_state_save=0 00:19:10.077 do_verify=1 00:19:10.077 verify=crc32c-intel 00:19:10.077 [job0] 00:19:10.077 filename=/dev/nvme0n1 00:19:10.077 [job1] 00:19:10.077 filename=/dev/nvme0n2 00:19:10.077 [job2] 00:19:10.077 filename=/dev/nvme0n3 00:19:10.077 [job3] 00:19:10.077 filename=/dev/nvme0n4 00:19:10.077 Could not set queue depth (nvme0n1) 00:19:10.077 Could not set queue depth (nvme0n2) 00:19:10.077 Could not set queue depth (nvme0n3) 00:19:10.077 Could not set queue depth (nvme0n4) 00:19:10.334 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:10.334 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:10.334 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:10.334 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:10.334 fio-3.35 00:19:10.334 Starting 4 threads 00:19:11.710 00:19:11.710 job0: (groupid=0, jobs=1): err= 0: pid=1232862: Wed Apr 24 21:24:26 2024 00:19:11.710 read: IOPS=3056, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1005msec) 00:19:11.710 slat (nsec): min=1075, max=24745k, avg=202901.17, stdev=1435328.83 00:19:11.710 clat (msec): min=3, max=102, avg=21.68, stdev=18.17 00:19:11.710 lat (msec): min=3, max=102, avg=21.88, stdev=18.32 00:19:11.710 clat percentiles (msec): 00:19:11.710 | 1.00th=[ 7], 5.00th=[ 9], 10.00th=[ 10], 20.00th=[ 12], 00:19:11.710 | 30.00th=[ 12], 40.00th=[ 13], 50.00th=[ 14], 60.00th=[ 18], 00:19:11.710 | 70.00th=[ 21], 80.00th=[ 24], 90.00th=[ 48], 95.00th=[ 67], 00:19:11.710 | 99.00th=[ 93], 99.50th=[ 96], 99.90th=[ 104], 99.95th=[ 104], 00:19:11.710 | 99.99th=[ 104] 00:19:11.710 write: IOPS=3238, BW=12.7MiB/s (13.3MB/s)(12.7MiB/1005msec); 0 zone resets 00:19:11.710 slat (nsec): min=1625, max=14588k, avg=107094.92, stdev=522705.31 00:19:11.710 clat (msec): min=2, max=102, avg=18.72, stdev=12.14 00:19:11.710 lat (msec): min=2, max=102, avg=18.83, stdev=12.17 00:19:11.710 clat percentiles (msec): 00:19:11.710 | 1.00th=[ 4], 5.00th=[ 6], 10.00th=[ 7], 20.00th=[ 10], 00:19:11.710 | 30.00th=[ 12], 40.00th=[ 16], 50.00th=[ 18], 60.00th=[ 21], 00:19:11.710 | 70.00th=[ 22], 80.00th=[ 23], 90.00th=[ 32], 95.00th=[ 45], 00:19:11.710 | 99.00th=[ 69], 99.50th=[ 81], 99.90th=[ 96], 99.95th=[ 103], 00:19:11.710 | 99.99th=[ 104] 00:19:11.710 bw ( KiB/s): min= 9296, max=15728, per=19.04%, avg=12512.00, stdev=4548.11, samples=2 00:19:11.710 iops : min= 2324, max= 3932, avg=3128.00, stdev=1137.03, samples=2 00:19:11.710 lat (msec) : 4=0.96%, 10=16.48%, 20=45.54%, 50=30.82%, 100=5.96% 00:19:11.710 lat (msec) : 250=0.24% 00:19:11.710 cpu : usr=1.10%, sys=3.09%, ctx=369, majf=0, minf=1 00:19:11.710 IO depths : 
1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:19:11.710 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:11.710 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:11.710 issued rwts: total=3072,3255,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:11.710 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:11.710 job1: (groupid=0, jobs=1): err= 0: pid=1232867: Wed Apr 24 21:24:26 2024 00:19:11.710 read: IOPS=5510, BW=21.5MiB/s (22.6MB/s)(22.5MiB/1045msec) 00:19:11.710 slat (nsec): min=998, max=20216k, avg=81524.12, stdev=694064.52 00:19:11.710 clat (usec): min=1172, max=63022, avg=11422.14, stdev=8293.65 00:19:11.710 lat (usec): min=1180, max=63025, avg=11503.66, stdev=8333.35 00:19:11.710 clat percentiles (usec): 00:19:11.710 | 1.00th=[ 1680], 5.00th=[ 3687], 10.00th=[ 4948], 20.00th=[ 7963], 00:19:11.710 | 30.00th=[ 8717], 40.00th=[ 9241], 50.00th=[10028], 60.00th=[10290], 00:19:11.710 | 70.00th=[11469], 80.00th=[12649], 90.00th=[16319], 95.00th=[21627], 00:19:11.710 | 99.00th=[57934], 99.50th=[61080], 99.90th=[62129], 99.95th=[63177], 00:19:11.710 | 99.99th=[63177] 00:19:11.710 write: IOPS=5879, BW=23.0MiB/s (24.1MB/s)(24.0MiB/1045msec); 0 zone resets 00:19:11.710 slat (nsec): min=1565, max=17979k, avg=67421.29, stdev=516692.07 00:19:11.710 clat (usec): min=585, max=63025, avg=10879.44, stdev=6993.75 00:19:11.710 lat (usec): min=594, max=63028, avg=10946.86, stdev=7030.68 00:19:11.710 clat percentiles (usec): 00:19:11.710 | 1.00th=[ 1205], 5.00th=[ 2212], 10.00th=[ 3720], 20.00th=[ 5800], 00:19:11.710 | 30.00th=[ 7439], 40.00th=[ 8848], 50.00th=[ 9896], 60.00th=[10814], 00:19:11.710 | 70.00th=[11338], 80.00th=[13960], 90.00th=[19006], 95.00th=[22414], 00:19:11.710 | 99.00th=[35390], 99.50th=[44303], 99.90th=[52167], 99.95th=[52167], 00:19:11.710 | 99.99th=[63177] 00:19:11.710 bw ( KiB/s): min=24255, max=24840, per=37.35%, avg=24547.50, stdev=413.66, samples=2 00:19:11.710 iops : min= 6063, max= 6210, avg=6136.50, stdev=103.94, samples=2 00:19:11.710 lat (usec) : 750=0.06%, 1000=0.31% 00:19:11.710 lat (msec) : 2=2.54%, 4=5.89%, 10=42.46%, 20=41.35%, 50=6.23% 00:19:11.710 lat (msec) : 100=1.16% 00:19:11.710 cpu : usr=2.39%, sys=5.17%, ctx=568, majf=0, minf=1 00:19:11.710 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:19:11.710 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:11.710 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:11.710 issued rwts: total=5758,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:11.710 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:11.710 job2: (groupid=0, jobs=1): err= 0: pid=1232883: Wed Apr 24 21:24:26 2024 00:19:11.710 read: IOPS=3541, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1012msec) 00:19:11.710 slat (nsec): min=922, max=14758k, avg=126997.27, stdev=930227.34 00:19:11.710 clat (usec): min=5076, max=32678, avg=15114.29, stdev=5015.61 00:19:11.710 lat (usec): min=5080, max=32681, avg=15241.29, stdev=5096.03 00:19:11.710 clat percentiles (usec): 00:19:11.710 | 1.00th=[ 9110], 5.00th=[ 9765], 10.00th=[10159], 20.00th=[11076], 00:19:11.710 | 30.00th=[11469], 40.00th=[12256], 50.00th=[13173], 60.00th=[15139], 00:19:11.710 | 70.00th=[18482], 80.00th=[20055], 90.00th=[21103], 95.00th=[24249], 00:19:11.710 | 99.00th=[30278], 99.50th=[31589], 99.90th=[32637], 99.95th=[32637], 00:19:11.710 | 99.99th=[32637] 00:19:11.710 write: IOPS=3721, BW=14.5MiB/s (15.2MB/s)(14.7MiB/1012msec); 0 zone 
resets 00:19:11.710 slat (nsec): min=1623, max=15115k, avg=141185.67, stdev=804986.39 00:19:11.710 clat (usec): min=1087, max=73877, avg=19680.97, stdev=12358.54 00:19:11.710 lat (usec): min=1096, max=73887, avg=19822.16, stdev=12417.26 00:19:11.710 clat percentiles (usec): 00:19:11.710 | 1.00th=[ 2769], 5.00th=[ 6128], 10.00th=[ 7767], 20.00th=[10159], 00:19:11.710 | 30.00th=[12125], 40.00th=[15664], 50.00th=[17695], 60.00th=[21365], 00:19:11.710 | 70.00th=[22414], 80.00th=[22938], 90.00th=[32637], 95.00th=[47449], 00:19:11.710 | 99.00th=[69731], 99.50th=[71828], 99.90th=[73925], 99.95th=[73925], 00:19:11.710 | 99.99th=[73925] 00:19:11.710 bw ( KiB/s): min=12728, max=16384, per=22.15%, avg=14556.00, stdev=2585.18, samples=2 00:19:11.710 iops : min= 3182, max= 4096, avg=3639.00, stdev=646.30, samples=2 00:19:11.710 lat (msec) : 2=0.16%, 4=0.54%, 10=13.40%, 20=52.37%, 50=31.22% 00:19:11.710 lat (msec) : 100=2.30% 00:19:11.710 cpu : usr=1.98%, sys=2.67%, ctx=377, majf=0, minf=1 00:19:11.710 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:19:11.710 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:11.710 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:11.710 issued rwts: total=3584,3766,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:11.710 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:11.710 job3: (groupid=0, jobs=1): err= 0: pid=1232890: Wed Apr 24 21:24:26 2024 00:19:11.710 read: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec) 00:19:11.710 slat (nsec): min=847, max=15710k, avg=120312.74, stdev=793905.79 00:19:11.711 clat (usec): min=7650, max=48853, avg=15073.69, stdev=6662.43 00:19:11.711 lat (usec): min=7655, max=53409, avg=15194.01, stdev=6742.01 00:19:11.711 clat percentiles (usec): 00:19:11.711 | 1.00th=[ 8455], 5.00th=[ 9765], 10.00th=[10421], 20.00th=[11338], 00:19:11.711 | 30.00th=[11731], 40.00th=[11994], 50.00th=[12387], 60.00th=[13042], 00:19:11.711 | 70.00th=[14353], 80.00th=[16909], 90.00th=[24511], 95.00th=[30278], 00:19:11.711 | 99.00th=[45351], 99.50th=[47449], 99.90th=[49021], 99.95th=[49021], 00:19:11.711 | 99.99th=[49021] 00:19:11.711 write: IOPS=4000, BW=15.6MiB/s (16.4MB/s)(15.6MiB/1001msec); 0 zone resets 00:19:11.711 slat (nsec): min=1491, max=9674.0k, avg=138054.23, stdev=716966.85 00:19:11.711 clat (usec): min=400, max=64391, avg=18111.76, stdev=11443.40 00:19:11.711 lat (usec): min=907, max=64408, avg=18249.82, stdev=11502.97 00:19:11.711 clat percentiles (usec): 00:19:11.711 | 1.00th=[ 5932], 5.00th=[ 8717], 10.00th=[10421], 20.00th=[11207], 00:19:11.711 | 30.00th=[11600], 40.00th=[11863], 50.00th=[12256], 60.00th=[14353], 00:19:11.711 | 70.00th=[19006], 80.00th=[25297], 90.00th=[33162], 95.00th=[45876], 00:19:11.711 | 99.00th=[58459], 99.50th=[59507], 99.90th=[64226], 99.95th=[64226], 00:19:11.711 | 99.99th=[64226] 00:19:11.711 bw ( KiB/s): min=12288, max=12288, per=18.70%, avg=12288.00, stdev= 0.00, samples=1 00:19:11.711 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:19:11.711 lat (usec) : 500=0.01%, 1000=0.03% 00:19:11.711 lat (msec) : 10=6.76%, 20=71.49%, 50=19.77%, 100=1.94% 00:19:11.711 cpu : usr=1.60%, sys=2.60%, ctx=438, majf=0, minf=1 00:19:11.711 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:19:11.711 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:11.711 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:11.711 issued rwts: 
total=3584,4004,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:11.711 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:11.711 00:19:11.711 Run status group 0 (all jobs): 00:19:11.711 READ: bw=59.8MiB/s (62.7MB/s), 11.9MiB/s-21.5MiB/s (12.5MB/s-22.6MB/s), io=62.5MiB (65.5MB), run=1001-1045msec 00:19:11.711 WRITE: bw=64.2MiB/s (67.3MB/s), 12.7MiB/s-23.0MiB/s (13.3MB/s-24.1MB/s), io=67.1MiB (70.3MB), run=1001-1045msec 00:19:11.711 00:19:11.711 Disk stats (read/write): 00:19:11.711 nvme0n1: ios=2610/2815, merge=0/0, ticks=54391/51596, in_queue=105987, util=87.68% 00:19:11.711 nvme0n2: ios=4999/5120, merge=0/0, ticks=47897/52894, in_queue=100791, util=96.05% 00:19:11.711 nvme0n3: ios=3112/3375, merge=0/0, ticks=45091/61379, in_queue=106470, util=97.93% 00:19:11.711 nvme0n4: ios=2909/3072, merge=0/0, ticks=25817/30989, in_queue=56806, util=89.67% 00:19:11.711 21:24:26 -- target/fio.sh@55 -- # sync 00:19:11.711 21:24:26 -- target/fio.sh@59 -- # fio_pid=1233143 00:19:11.711 21:24:26 -- target/fio.sh@61 -- # sleep 3 00:19:11.711 21:24:26 -- target/fio.sh@58 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:19:11.711 [global] 00:19:11.711 thread=1 00:19:11.711 invalidate=1 00:19:11.711 rw=read 00:19:11.711 time_based=1 00:19:11.711 runtime=10 00:19:11.711 ioengine=libaio 00:19:11.711 direct=1 00:19:11.711 bs=4096 00:19:11.711 iodepth=1 00:19:11.711 norandommap=1 00:19:11.711 numjobs=1 00:19:11.711 00:19:11.711 [job0] 00:19:11.711 filename=/dev/nvme0n1 00:19:11.711 [job1] 00:19:11.711 filename=/dev/nvme0n2 00:19:11.711 [job2] 00:19:11.711 filename=/dev/nvme0n3 00:19:11.711 [job3] 00:19:11.711 filename=/dev/nvme0n4 00:19:11.711 Could not set queue depth (nvme0n1) 00:19:11.711 Could not set queue depth (nvme0n2) 00:19:11.711 Could not set queue depth (nvme0n3) 00:19:11.711 Could not set queue depth (nvme0n4) 00:19:11.969 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:11.969 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:11.969 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:11.969 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:11.969 fio-3.35 00:19:11.969 Starting 4 threads 00:19:14.600 21:24:29 -- target/fio.sh@63 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:19:14.859 21:24:29 -- target/fio.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:19:14.859 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=266240, buflen=4096 00:19:14.859 fio: pid=1233497, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:14.859 21:24:29 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:14.859 21:24:29 -- target/fio.sh@66 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:19:14.859 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=7364608, buflen=4096 00:19:14.859 fio: pid=1233490, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:15.117 fio: io_u error on file /dev/nvme0n1: Input/output error: read offset=6139904, buflen=4096 00:19:15.117 fio: pid=1233458, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:19:15.117 21:24:29 -- 
target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:15.117 21:24:29 -- target/fio.sh@66 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:19:15.377 21:24:30 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:15.377 21:24:30 -- target/fio.sh@66 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:19:15.377 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=8130560, buflen=4096 00:19:15.377 fio: pid=1233471, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:15.377 00:19:15.377 job0: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=1233458: Wed Apr 24 21:24:30 2024 00:19:15.377 read: IOPS=521, BW=2083KiB/s (2133kB/s)(5996KiB/2878msec) 00:19:15.377 slat (usec): min=3, max=29691, avg=33.58, stdev=792.29 00:19:15.377 clat (usec): min=211, max=42393, avg=1884.05, stdev=7933.18 00:19:15.377 lat (usec): min=218, max=42402, avg=1912.44, stdev=7970.35 00:19:15.377 clat percentiles (usec): 00:19:15.377 | 1.00th=[ 235], 5.00th=[ 247], 10.00th=[ 251], 20.00th=[ 260], 00:19:15.377 | 30.00th=[ 269], 40.00th=[ 277], 50.00th=[ 285], 60.00th=[ 302], 00:19:15.377 | 70.00th=[ 330], 80.00th=[ 375], 90.00th=[ 408], 95.00th=[ 498], 00:19:15.377 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:19:15.377 | 99.99th=[42206] 00:19:15.377 bw ( KiB/s): min= 96, max= 5168, per=33.58%, avg=2337.60, stdev=2014.88, samples=5 00:19:15.377 iops : min= 24, max= 1292, avg=584.40, stdev=503.72, samples=5 00:19:15.377 lat (usec) : 250=7.73%, 500=87.27%, 750=1.00%, 1000=0.13% 00:19:15.377 lat (msec) : 50=3.80% 00:19:15.377 cpu : usr=0.17%, sys=0.76%, ctx=1502, majf=0, minf=1 00:19:15.377 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:15.377 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:15.377 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:15.377 issued rwts: total=1500,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:15.377 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:15.377 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1233471: Wed Apr 24 21:24:30 2024 00:19:15.377 read: IOPS=646, BW=2584KiB/s (2646kB/s)(7940KiB/3073msec) 00:19:15.377 slat (usec): min=3, max=11747, avg=27.85, stdev=454.40 00:19:15.377 clat (usec): min=187, max=45014, avg=1506.96, stdev=6971.42 00:19:15.377 lat (usec): min=192, max=45042, avg=1534.82, stdev=6986.23 00:19:15.377 clat percentiles (usec): 00:19:15.377 | 1.00th=[ 202], 5.00th=[ 210], 10.00th=[ 217], 20.00th=[ 239], 00:19:15.377 | 30.00th=[ 258], 40.00th=[ 265], 50.00th=[ 277], 60.00th=[ 289], 00:19:15.377 | 70.00th=[ 318], 80.00th=[ 371], 90.00th=[ 404], 95.00th=[ 486], 00:19:15.377 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[44827], 00:19:15.377 | 99.99th=[44827] 00:19:15.377 bw ( KiB/s): min= 96, max= 4112, per=36.34%, avg=2529.60, stdev=1652.64, samples=5 00:19:15.377 iops : min= 24, max= 1028, avg=632.40, stdev=413.16, samples=5 00:19:15.377 lat (usec) : 250=24.47%, 500=70.85%, 750=1.56%, 1000=0.10% 00:19:15.377 lat (msec) : 10=0.05%, 50=2.92% 00:19:15.377 cpu : usr=0.16%, sys=0.62%, ctx=1993, majf=0, minf=1 00:19:15.377 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:15.377 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:15.377 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:15.377 issued rwts: total=1986,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:15.377 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:15.377 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1233490: Wed Apr 24 21:24:30 2024 00:19:15.377 read: IOPS=658, BW=2633KiB/s (2697kB/s)(7192KiB/2731msec) 00:19:15.377 slat (nsec): min=4091, max=36287, avg=8759.97, stdev=6140.81 00:19:15.377 clat (usec): min=209, max=42697, avg=1495.62, stdev=6886.05 00:19:15.377 lat (usec): min=217, max=42704, avg=1504.37, stdev=6888.33 00:19:15.377 clat percentiles (usec): 00:19:15.377 | 1.00th=[ 229], 5.00th=[ 243], 10.00th=[ 253], 20.00th=[ 262], 00:19:15.377 | 30.00th=[ 273], 40.00th=[ 281], 50.00th=[ 289], 60.00th=[ 306], 00:19:15.377 | 70.00th=[ 343], 80.00th=[ 375], 90.00th=[ 412], 95.00th=[ 498], 00:19:15.377 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42730], 00:19:15.377 | 99.99th=[42730] 00:19:15.377 bw ( KiB/s): min= 96, max= 6048, per=41.21%, avg=2868.80, stdev=2496.28, samples=5 00:19:15.377 iops : min= 24, max= 1512, avg=717.20, stdev=624.07, samples=5 00:19:15.377 lat (usec) : 250=6.95%, 500=88.33%, 750=1.72%, 1000=0.06% 00:19:15.377 lat (msec) : 20=0.06%, 50=2.83% 00:19:15.377 cpu : usr=0.15%, sys=0.77%, ctx=1799, majf=0, minf=1 00:19:15.377 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:15.377 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:15.377 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:15.377 issued rwts: total=1799,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:15.377 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:15.377 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1233497: Wed Apr 24 21:24:30 2024 00:19:15.377 read: IOPS=25, BW=101KiB/s (103kB/s)(260KiB/2575msec) 00:19:15.377 slat (nsec): min=4223, max=39909, avg=30585.95, stdev=7232.95 00:19:15.377 clat (usec): min=289, max=42108, avg=39191.48, stdev=9762.89 00:19:15.377 lat (usec): min=295, max=42127, avg=39222.03, stdev=9766.43 00:19:15.377 clat percentiles (usec): 00:19:15.377 | 1.00th=[ 289], 5.00th=[ 4015], 10.00th=[40633], 20.00th=[41157], 00:19:15.377 | 30.00th=[41157], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:19:15.377 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:19:15.377 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:19:15.377 | 99.99th=[42206] 00:19:15.377 bw ( KiB/s): min= 96, max= 120, per=1.44%, avg=100.80, stdev=10.73, samples=5 00:19:15.377 iops : min= 24, max= 30, avg=25.20, stdev= 2.68, samples=5 00:19:15.377 lat (usec) : 500=1.52%, 750=1.52%, 1000=1.52% 00:19:15.377 lat (msec) : 10=1.52%, 50=92.42% 00:19:15.377 cpu : usr=0.12%, sys=0.00%, ctx=68, majf=0, minf=2 00:19:15.377 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:15.377 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:15.377 complete : 0=1.5%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:15.377 issued rwts: total=66,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:15.377 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:15.377 00:19:15.377 Run status group 0 (all jobs): 00:19:15.377 READ: bw=6960KiB/s (7127kB/s), 101KiB/s-2633KiB/s 
(103kB/s-2697kB/s), io=20.9MiB (21.9MB), run=2575-3073msec 00:19:15.377 00:19:15.377 Disk stats (read/write): 00:19:15.377 nvme0n1: ios=1534/0, merge=0/0, ticks=3764/0, in_queue=3764, util=98.53% 00:19:15.377 nvme0n2: ios=1906/0, merge=0/0, ticks=2796/0, in_queue=2796, util=94.69% 00:19:15.377 nvme0n3: ios=1795/0, merge=0/0, ticks=2558/0, in_queue=2558, util=96.07% 00:19:15.377 nvme0n4: ios=93/0, merge=0/0, ticks=3074/0, in_queue=3074, util=99.26% 00:19:15.377 21:24:30 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:15.377 21:24:30 -- target/fio.sh@66 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:19:15.637 21:24:30 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:15.637 21:24:30 -- target/fio.sh@66 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:19:15.896 21:24:30 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:15.896 21:24:30 -- target/fio.sh@66 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:19:15.896 21:24:30 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:15.896 21:24:30 -- target/fio.sh@66 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:19:16.154 21:24:30 -- target/fio.sh@69 -- # fio_status=0 00:19:16.154 21:24:30 -- target/fio.sh@70 -- # wait 1233143 00:19:16.154 21:24:30 -- target/fio.sh@70 -- # fio_status=4 00:19:16.154 21:24:30 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:16.412 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:16.412 21:24:31 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:16.412 21:24:31 -- common/autotest_common.sh@1205 -- # local i=0 00:19:16.412 21:24:31 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:19:16.412 21:24:31 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:16.412 21:24:31 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:19:16.412 21:24:31 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:16.412 21:24:31 -- common/autotest_common.sh@1217 -- # return 0 00:19:16.412 21:24:31 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:19:16.412 21:24:31 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:19:16.412 nvmf hotplug test: fio failed as expected 00:19:16.412 21:24:31 -- target/fio.sh@83 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:16.670 21:24:31 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:19:16.670 21:24:31 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:19:16.670 21:24:31 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:19:16.670 21:24:31 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:19:16.670 21:24:31 -- target/fio.sh@91 -- # nvmftestfini 00:19:16.670 21:24:31 -- nvmf/common.sh@477 -- # nvmfcleanup 00:19:16.670 21:24:31 -- nvmf/common.sh@117 -- # sync 00:19:16.670 21:24:31 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:16.670 21:24:31 -- nvmf/common.sh@120 -- # set +e 00:19:16.670 21:24:31 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:16.670 21:24:31 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:16.670 rmmod nvme_tcp 
00:19:16.670 rmmod nvme_fabrics 00:19:16.670 rmmod nvme_keyring 00:19:16.670 21:24:31 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:16.670 21:24:31 -- nvmf/common.sh@124 -- # set -e 00:19:16.670 21:24:31 -- nvmf/common.sh@125 -- # return 0 00:19:16.670 21:24:31 -- nvmf/common.sh@478 -- # '[' -n 1229733 ']' 00:19:16.670 21:24:31 -- nvmf/common.sh@479 -- # killprocess 1229733 00:19:16.670 21:24:31 -- common/autotest_common.sh@936 -- # '[' -z 1229733 ']' 00:19:16.670 21:24:31 -- common/autotest_common.sh@940 -- # kill -0 1229733 00:19:16.670 21:24:31 -- common/autotest_common.sh@941 -- # uname 00:19:16.670 21:24:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:16.670 21:24:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1229733 00:19:16.670 21:24:31 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:16.670 21:24:31 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:16.670 21:24:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1229733' 00:19:16.670 killing process with pid 1229733 00:19:16.670 21:24:31 -- common/autotest_common.sh@955 -- # kill 1229733 00:19:16.670 21:24:31 -- common/autotest_common.sh@960 -- # wait 1229733 00:19:17.238 21:24:32 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:17.238 21:24:32 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:19:17.238 21:24:32 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:19:17.238 21:24:32 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:17.238 21:24:32 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:17.238 21:24:32 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:17.238 21:24:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:17.238 21:24:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:19.766 21:24:34 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:19.766 00:19:19.766 real 0m26.774s 00:19:19.766 user 2m47.204s 00:19:19.766 sys 0m6.723s 00:19:19.766 21:24:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:19.766 21:24:34 -- common/autotest_common.sh@10 -- # set +x 00:19:19.766 ************************************ 00:19:19.766 END TEST nvmf_fio_target 00:19:19.766 ************************************ 00:19:19.766 21:24:34 -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:19:19.766 21:24:34 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:19.766 21:24:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:19.766 21:24:34 -- common/autotest_common.sh@10 -- # set +x 00:19:19.766 ************************************ 00:19:19.766 START TEST nvmf_bdevio 00:19:19.766 ************************************ 00:19:19.766 21:24:34 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:19:19.766 * Looking for test storage... 
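The err=5 (Input/output error) and err=121 (Remote I/O error) entries in the 10-second read run above are deliberate: while fio was running, the script deleted concat0, raid0, and all seven malloc bdevs out from under the connected namespaces, then required a non-zero fio exit (here 4) before printing 'nvmf hotplug test: fio failed as expected'. A compressed sketch of that hotplug pattern, assuming the target from the earlier sketch (paths hypothetical, $rpc reused from above):

    # Hotplug pattern: start I/O, rip out the backing bdevs, expect fio to fail.
    fio_wrapper=/path/to/spdk/scripts/fio-wrapper      # hypothetical location
    $fio_wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &  # 10 s of reads on /dev/nvme0n1..n4
    fio_pid=$!
    sleep 3                                            # let I/O get going first
    $rpc bdev_raid_delete concat0                      # namespaces vanish mid-run
    $rpc bdev_raid_delete raid0
    for m in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
        $rpc bdev_malloc_delete "$m"
    done
    fio_status=0
    wait "$fio_pid" || fio_status=$?                   # capture fio's exit code
    if [ "$fio_status" -eq 0 ]; then
        echo 'fio unexpectedly survived bdev removal' >&2
        exit 1
    fi
    echo 'nvmf hotplug test: fio failed as expected'

The inverted success criterion is the design point: a clean fio exit would mean the initiator kept completing I/O against namespaces whose backing bdevs were gone, so only a failing fio proves the removal actually propagated through the NVMe/TCP connection.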
00:19:19.766 21:24:34 -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp
00:19:19.766 21:24:34 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:19:19.766 21:24:34 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:19:19.766 21:24:34 -- common/autotest_common.sh@10 -- # set +x
00:19:19.766 ************************************
00:19:19.766 START TEST nvmf_bdevio
00:19:19.766 ************************************
00:19:19.766 21:24:34 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp
00:19:19.766 * Looking for test storage...
00:19:19.766 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target
00:19:19.766 21:24:34 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh
00:19:19.766 21:24:34 -- nvmf/common.sh@7 -- # uname -s
00:19:19.766 21:24:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:19:19.766 21:24:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:19:19.766 21:24:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:19:19.766 21:24:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:19:19.766 21:24:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:19:19.766 21:24:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:19:19.766 21:24:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:19:19.766 21:24:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:19:19.766 21:24:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:19:19.766 21:24:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:19:19.766 21:24:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b7babf-2e5c-ee11-906e-a4bf01970bf2
00:19:19.767 21:24:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=80b7babf-2e5c-ee11-906e-a4bf01970bf2
00:19:19.767 21:24:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:19:19.767 21:24:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:19:19.767 21:24:34 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:19:19.767 21:24:34 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:19:19.767 21:24:34 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh
00:19:19.767 21:24:34 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]]
00:19:19.767 21:24:34 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:19:19.767 21:24:34 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:19:19.767 21:24:34 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:19:19.767 21:24:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:19:19.767 21:24:34 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:19:19.767 21:24:34 -- paths/export.sh@5 -- # export PATH
00:19:19.767 21:24:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:19:19.767 21:24:34 -- nvmf/common.sh@47 -- # : 0
00:19:19.767 21:24:34 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:19:19.767 21:24:34 -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:19:19.767 21:24:34 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:19:19.767 21:24:34 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:19:19.767 21:24:34 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:19:19.767 21:24:34 -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:19:19.767 21:24:34 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:19:19.767 21:24:34 -- nvmf/common.sh@51 -- # have_pci_nics=0
00:19:19.767 21:24:34 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64
00:19:19.767 21:24:34 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:19:19.767 21:24:34 -- target/bdevio.sh@14 -- # nvmftestinit
00:19:19.767 21:24:34 -- nvmf/common.sh@430 -- # '[' -z tcp ']'
00:19:19.767 21:24:34 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:19:19.767 21:24:34 -- nvmf/common.sh@437 -- # prepare_net_devs
00:19:19.767 21:24:34 -- nvmf/common.sh@399 -- # local -g is_hw=no
00:19:19.767 21:24:34 -- nvmf/common.sh@401 -- # remove_spdk_ns
00:19:19.767 21:24:34 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:19:19.767 21:24:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:19:19.767 21:24:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:19:19.767 21:24:34 -- nvmf/common.sh@403 -- # [[ phy-fallback != virt ]]
00:19:19.767 21:24:34 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs
00:19:19.767 21:24:34 -- nvmf/common.sh@285 -- # xtrace_disable
00:19:19.767 21:24:34 -- common/autotest_common.sh@10 -- # set +x
00:19:25.039 21:24:39 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci
00:19:25.039 21:24:39 -- nvmf/common.sh@291 -- # pci_devs=()
00:19:25.039 21:24:39 -- nvmf/common.sh@291 -- # local -a pci_devs
00:19:25.039 21:24:39 -- nvmf/common.sh@292 -- # pci_net_devs=()
00:19:25.039 21:24:39 -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:19:25.039 21:24:39 -- nvmf/common.sh@293 -- # pci_drivers=()
00:19:25.039 21:24:39 -- nvmf/common.sh@293 -- # local -A pci_drivers
00:19:25.039 21:24:39 -- nvmf/common.sh@295 -- # net_devs=()
00:19:25.039 21:24:39 -- nvmf/common.sh@295 -- # local -ga net_devs
00:19:25.039 21:24:39 -- nvmf/common.sh@296 -- # e810=()
00:19:25.039 21:24:39 -- nvmf/common.sh@296 -- # local -ga e810
00:19:25.039 21:24:39 -- nvmf/common.sh@297 -- # x722=()
00:19:25.039 21:24:39 -- nvmf/common.sh@297 -- # local -ga x722
00:19:25.039 21:24:39 -- nvmf/common.sh@298 -- # mlx=()
00:19:25.039 21:24:39 -- nvmf/common.sh@298 -- # local -ga mlx
00:19:25.039 21:24:39 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:19:25.039 21:24:39 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:19:25.039 21:24:39 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:19:25.039 21:24:39 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:19:25.039 21:24:39 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:19:25.039 21:24:39 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:19:25.039 21:24:39 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:19:25.039 21:24:39 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:19:25.039 21:24:39 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:19:25.039 21:24:39 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:19:25.039 21:24:39 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:19:25.039 21:24:39 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:19:25.039 21:24:39 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:19:25.039 21:24:39 -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]]
00:19:25.039 21:24:39 -- nvmf/common.sh@329 -- # [[ '' == e810 ]]
00:19:25.039 21:24:39 -- nvmf/common.sh@331 -- # [[ '' == x722 ]]
00:19:25.039 21:24:39 -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:19:25.039 21:24:39 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:19:25.039 21:24:39 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)'
00:19:25.039 Found 0000:27:00.0 (0x8086 - 0x159b)
00:19:25.039 21:24:39 -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:19:25.039 21:24:39 -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:19:25.039 21:24:39 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:19:25.039 21:24:39 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:19:25.039 21:24:39 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:19:25.039 21:24:39 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:19:25.039 21:24:39 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)'
00:19:25.039 Found 0000:27:00.1 (0x8086 - 0x159b)
00:19:25.039 21:24:39 -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:19:25.039 21:24:39 -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:19:25.039 21:24:39 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:19:25.039 21:24:39 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:19:25.039 21:24:39 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:19:25.039 21:24:39 -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:19:25.039 21:24:39 -- nvmf/common.sh@372 -- # [[ '' == e810 ]]
00:19:25.039 21:24:39 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:19:25.039 21:24:39 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:19:25.039 21:24:39 -- nvmf/common.sh@384 -- # (( 1 == 0 ))
00:19:25.039 21:24:39 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:19:25.039 21:24:39 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0'
00:19:25.039 Found net devices under 0000:27:00.0: cvl_0_0
00:19:25.039 21:24:39 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}")
00:19:25.039 21:24:39 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:19:25.039 21:24:39 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:19:25.039 21:24:39 -- nvmf/common.sh@384 -- # (( 1 == 0 ))
00:19:25.039 21:24:39 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:19:25.039 21:24:39 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1'
00:19:25.039 Found net devices under 0000:27:00.1: cvl_0_1
00:19:25.039 21:24:39 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}")
00:19:25.039 21:24:39 -- nvmf/common.sh@393 -- # (( 2 == 0 ))
00:19:25.039 21:24:39 -- nvmf/common.sh@403 -- # is_hw=yes
00:19:25.039 21:24:39 -- nvmf/common.sh@405 -- # [[ yes == yes ]]
00:19:25.039 21:24:39 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]]
00:19:25.039 21:24:39 -- nvmf/common.sh@407 -- # nvmf_tcp_init
00:19:25.039 21:24:39 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:19:25.039 21:24:39 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:19:25.039 21:24:39 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:19:25.039 21:24:39 -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:19:25.039 21:24:39 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:19:25.039 21:24:39 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:19:25.039 21:24:39 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:19:25.039 21:24:39 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:19:25.039 21:24:39 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:19:25.039 21:24:39 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:19:25.039 21:24:39 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:19:25.039 21:24:39 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:19:25.039 21:24:39 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:19:25.039 21:24:39 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:19:25.039 21:24:39 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:19:25.039 21:24:39 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:19:25.039 21:24:39 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:19:25.039 21:24:39 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:19:25.039 21:24:39 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:19:25.039 21:24:39 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:19:25.039 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:19:25.039 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.597 ms
00:19:25.039
00:19:25.039 --- 10.0.0.2 ping statistics ---
00:19:25.039 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:19:25.039 rtt min/avg/max/mdev = 0.597/0.597/0.597/0.000 ms
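The two pings close out nvmf_tcp_init: one physical port (cvl_0_0) is moved into a private network namespace to act as the target, and its sibling (cvl_0_1) stays in the root namespace as the initiator. The same topology can be rebuilt by hand; the interface names are specific to this host:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                             # initiator -> target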
00:19:25.039 21:24:39 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:19:25.039 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:19:25.039 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.328 ms
00:19:25.039
00:19:25.039 --- 10.0.0.1 ping statistics ---
00:19:25.039 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:19:25.039 rtt min/avg/max/mdev = 0.328/0.328/0.328/0.000 ms
00:19:25.039 21:24:39 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:19:25.040 21:24:39 -- nvmf/common.sh@411 -- # return 0
00:19:25.040 21:24:39 -- nvmf/common.sh@439 -- # '[' '' == iso ']'
00:19:25.040 21:24:39 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:19:25.040 21:24:39 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]]
00:19:25.040 21:24:39 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]]
00:19:25.040 21:24:39 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:19:25.040 21:24:39 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']'
00:19:25.040 21:24:39 -- nvmf/common.sh@463 -- # modprobe nvme-tcp
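nvmfappstart below wraps the target launch; the fully expanded command line is echoed at nvmf/common.sh@469. Stripped of the wrapper, it is just the target binary pinned inside the namespace:

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0x78    # shm id 0, all tracepoint groups, core mask 0x78 (cores 3-6)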
00:19:25.040 21:24:39 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78
00:19:25.040 21:24:39 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt
00:19:25.040 21:24:39 -- common/autotest_common.sh@710 -- # xtrace_disable
00:19:25.040 21:24:39 -- common/autotest_common.sh@10 -- # set +x
00:19:25.040 21:24:39 -- nvmf/common.sh@470 -- # nvmfpid=1238420
00:19:25.040 21:24:39 -- nvmf/common.sh@471 -- # waitforlisten 1238420
00:19:25.040 21:24:39 -- common/autotest_common.sh@817 -- # '[' -z 1238420 ']'
00:19:25.040 21:24:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:19:25.040 21:24:39 -- common/autotest_common.sh@822 -- # local max_retries=100
00:19:25.040 21:24:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:19:25.040 21:24:39 -- common/autotest_common.sh@826 -- # xtrace_disable
00:19:25.040 21:24:39 -- common/autotest_common.sh@10 -- # set +x
00:19:25.040 21:24:39 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78
00:19:25.040 [2024-04-24 21:24:39.644665] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization...
00:19:25.040 [2024-04-24 21:24:39.644769] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:19:25.040 EAL: No free 2048 kB hugepages reported on node 1
00:19:25.040 [2024-04-24 21:24:39.763328] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4
00:19:25.040 [2024-04-24 21:24:39.855996] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:19:25.040 [2024-04-24 21:24:39.856030] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:19:25.040 [2024-04-24 21:24:39.856043] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:19:25.040 [2024-04-24 21:24:39.856052] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running.
00:19:25.040 [2024-04-24 21:24:39.856059] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:19:25.040 [2024-04-24 21:24:39.856292] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4
00:19:25.040 [2024-04-24 21:24:39.856415] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5
00:19:25.040 [2024-04-24 21:24:39.856514] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:19:25.040 [2024-04-24 21:24:39.856544] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6
00:19:25.609 21:24:40 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:19:25.609 21:24:40 -- common/autotest_common.sh@850 -- # return 0
00:19:25.609 21:24:40 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt
00:19:25.609 21:24:40 -- common/autotest_common.sh@716 -- # xtrace_disable
00:19:25.609 21:24:40 -- common/autotest_common.sh@10 -- # set +x
00:19:25.609 21:24:40 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:19:25.609 21:24:40 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:19:25.609 21:24:40 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:25.609 21:24:40 -- common/autotest_common.sh@10 -- # set +x
00:19:25.609 [2024-04-24 21:24:40.391475] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:19:25.609 21:24:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:25.609 21:24:40 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:19:25.609 21:24:40 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:25.609 21:24:40 -- common/autotest_common.sh@10 -- # set +x
00:19:25.609 Malloc0
00:19:25.609 21:24:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:25.609 21:24:40 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:19:25.609 21:24:40 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:25.609 21:24:40 -- common/autotest_common.sh@10 -- # set +x
00:19:25.609 21:24:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:25.609 21:24:40 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:19:25.609 21:24:40 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:25.609 21:24:40 -- common/autotest_common.sh@10 -- # set +x
00:19:25.609 21:24:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:25.609 21:24:40 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:19:25.609 21:24:40 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:25.609 21:24:40 -- common/autotest_common.sh@10 -- # set +x
00:19:25.609 [2024-04-24 21:24:40.460550] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:19:25.609 21:24:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
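rpc_cmd in the trace talks to scripts/rpc.py, so the target-side provisioning above condenses to five RPC calls (flags exactly as traced; a sketch run from the spdk checkout):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0             # 64 MiB bdev, 512-byte blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420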
00:19:25.609 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:25.609 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:25.609 "hdgst": ${hdgst:-false}, 00:19:25.609 "ddgst": ${ddgst:-false} 00:19:25.609 }, 00:19:25.609 "method": "bdev_nvme_attach_controller" 00:19:25.609 } 00:19:25.609 EOF 00:19:25.609 )") 00:19:25.609 21:24:40 -- nvmf/common.sh@543 -- # cat 00:19:25.609 21:24:40 -- nvmf/common.sh@545 -- # jq . 00:19:25.609 21:24:40 -- nvmf/common.sh@546 -- # IFS=, 00:19:25.609 21:24:40 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:19:25.609 "params": { 00:19:25.609 "name": "Nvme1", 00:19:25.609 "trtype": "tcp", 00:19:25.609 "traddr": "10.0.0.2", 00:19:25.609 "adrfam": "ipv4", 00:19:25.609 "trsvcid": "4420", 00:19:25.609 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:25.609 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:25.609 "hdgst": false, 00:19:25.609 "ddgst": false 00:19:25.609 }, 00:19:25.609 "method": "bdev_nvme_attach_controller" 00:19:25.609 }' 00:19:25.609 [2024-04-24 21:24:40.536506] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization... 00:19:25.609 [2024-04-24 21:24:40.536611] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1238480 ] 00:19:25.868 EAL: No free 2048 kB hugepages reported on node 1 00:19:25.868 [2024-04-24 21:24:40.652186] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:25.868 [2024-04-24 21:24:40.745849] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:25.868 [2024-04-24 21:24:40.745956] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:25.868 [2024-04-24 21:24:40.745962] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:26.129 I/O targets: 00:19:26.129 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:26.129 00:19:26.129 00:19:26.129 CUnit - A unit testing framework for C - Version 2.1-3 00:19:26.129 http://cunit.sourceforge.net/ 00:19:26.129 00:19:26.129 00:19:26.129 Suite: bdevio tests on: Nvme1n1 00:19:26.129 Test: blockdev write read block ...passed 00:19:26.129 Test: blockdev write zeroes read block ...passed 00:19:26.129 Test: blockdev write zeroes read no split ...passed 00:19:26.388 Test: blockdev write zeroes read split ...passed 00:19:26.388 Test: blockdev write zeroes read split partial ...passed 00:19:26.388 Test: blockdev reset ...[2024-04-24 21:24:41.191058] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:26.388 [2024-04-24 21:24:41.191151] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:19:26.388 [2024-04-24 21:24:41.247624] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:19:25.609 [2024-04-24 21:24:40.536506] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization...
00:19:25.609 [2024-04-24 21:24:40.536610] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1238480 ]
00:19:25.868 EAL: No free 2048 kB hugepages reported on node 1
00:19:25.868 [2024-04-24 21:24:40.652186] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3
00:19:25.868 [2024-04-24 21:24:40.745849] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:19:25.868 [2024-04-24 21:24:40.745956] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:19:25.868 [2024-04-24 21:24:40.745962] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:19:26.129 I/O targets:
00:19:26.129 Nvme1n1: 131072 blocks of 512 bytes (64 MiB)
00:19:26.129
00:19:26.129
00:19:26.129 CUnit - A unit testing framework for C - Version 2.1-3
00:19:26.129 http://cunit.sourceforge.net/
00:19:26.129
00:19:26.129
00:19:26.129 Suite: bdevio tests on: Nvme1n1
00:19:26.129 Test: blockdev write read block ...passed
00:19:26.129 Test: blockdev write zeroes read block ...passed
00:19:26.129 Test: blockdev write zeroes read no split ...passed
00:19:26.388 Test: blockdev write zeroes read split ...passed
00:19:26.388 Test: blockdev write zeroes read split partial ...passed
00:19:26.388 Test: blockdev reset ...[2024-04-24 21:24:41.191058] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:19:26.388 [2024-04-24 21:24:41.191151] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor
00:19:26.388 [2024-04-24 21:24:41.247624] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:19:26.388 passed
00:19:26.388 Test: blockdev write read 8 blocks ...passed
00:19:26.388 Test: blockdev write read size > 128k ...passed
00:19:26.388 Test: blockdev write read invalid size ...passed
00:19:26.388 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:19:26.388 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:19:26.388 Test: blockdev write read max offset ...passed
00:19:26.646 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:19:26.646 Test: blockdev writev readv 8 blocks ...passed
00:19:26.646 Test: blockdev writev readv 30 x 1block ...passed
00:19:26.646 Test: blockdev writev readv block ...passed
00:19:26.646 Test: blockdev writev readv size > 128k ...passed
00:19:26.646 Test: blockdev writev readv size > 128k in two iovs ...passed
00:19:26.646 Test: blockdev comparev and writev ...[2024-04-24 21:24:41.424397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:19:26.646 [2024-04-24 21:24:41.424435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:19:26.646 [2024-04-24 21:24:41.424452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:19:26.646 [2024-04-24 21:24:41.424462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:19:26.646 [2024-04-24 21:24:41.424821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:19:26.646 [2024-04-24 21:24:41.424830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:19:26.646 [2024-04-24 21:24:41.424845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:19:26.646 [2024-04-24 21:24:41.424853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:19:26.646 [2024-04-24 21:24:41.425192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:19:26.646 [2024-04-24 21:24:41.425202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:19:26.646 [2024-04-24 21:24:41.425216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:19:26.646 [2024-04-24 21:24:41.425230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:19:26.646 [2024-04-24 21:24:41.425565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:19:26.646 [2024-04-24 21:24:41.425574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:19:26.646 [2024-04-24 21:24:41.425587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:19:26.646 [2024-04-24 21:24:41.425595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:19:26.646 passed
00:19:26.646 Test: blockdev nvme passthru rw ...passed
00:19:26.646 Test: blockdev nvme passthru vendor specific ...[2024-04-24 21:24:41.509726] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:19:26.646 [2024-04-24 21:24:41.509750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:19:26.646 [2024-04-24 21:24:41.509941] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:19:26.646 [2024-04-24 21:24:41.509950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:19:26.646 [2024-04-24 21:24:41.510133] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:19:26.646 [2024-04-24 21:24:41.510141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:19:26.646 [2024-04-24 21:24:41.510320] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:19:26.646 [2024-04-24 21:24:41.510330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:19:26.646 passed
00:19:26.646 Test: blockdev nvme admin passthru ...passed
00:19:26.646 Test: blockdev copy ...passed
00:19:26.646
00:19:26.646 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:19:26.646               suites      1      1    n/a      0        0
00:19:26.646                tests     23     23     23      0        0
00:19:26.646              asserts    152    152    152      0      n/a
00:19:26.646
00:19:26.646 Elapsed time =    1.202 seconds
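All 23 cases pass. The COMPARE FAILURE and ABORTED - FAILED FUSED notices above belong to the passing comparev-and-writev cases, which drive the fused compare-and-write failure path on purpose, so they are expected in a clean run. The suite is self-contained and can be repeated on its own from the checkout used by this job:

    cd /var/jenkins/workspace/dsa-phy-autotest/spdk
    test/nvmf/target/bdevio.sh --transport=tcp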
00:19:27.213 21:24:41 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:19:27.213 21:24:41 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:27.213 21:24:41 -- common/autotest_common.sh@10 -- # set +x
00:19:27.213 21:24:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:27.213 21:24:41 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT
00:19:27.213 21:24:41 -- target/bdevio.sh@30 -- # nvmftestfini
00:19:27.213 21:24:41 -- nvmf/common.sh@477 -- # nvmfcleanup
00:19:27.213 21:24:41 -- nvmf/common.sh@117 -- # sync
00:19:27.213 21:24:41 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:19:27.213 21:24:41 -- nvmf/common.sh@120 -- # set +e
00:19:27.213 21:24:41 -- nvmf/common.sh@121 -- # for i in {1..20}
00:19:27.213 21:24:41 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:19:27.213 rmmod nvme_tcp
00:19:27.213 rmmod nvme_fabrics
00:19:27.213 rmmod nvme_keyring
00:19:27.213 21:24:41 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:19:27.213 21:24:41 -- nvmf/common.sh@124 -- # set -e
00:19:27.213 21:24:41 -- nvmf/common.sh@125 -- # return 0
00:19:27.213 21:24:41 -- nvmf/common.sh@478 -- # '[' -n 1238420 ']'
00:19:27.213 21:24:41 -- nvmf/common.sh@479 -- # killprocess 1238420
00:19:27.213 21:24:41 -- common/autotest_common.sh@936 -- # '[' -z 1238420 ']'
00:19:27.213 21:24:41 -- common/autotest_common.sh@940 -- # kill -0 1238420
00:19:27.213 21:24:41 -- common/autotest_common.sh@941 -- # uname
00:19:27.213 21:24:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:19:27.213 21:24:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1238420
00:19:27.213 21:24:42 -- common/autotest_common.sh@942 -- # process_name=reactor_3
00:19:27.213 21:24:42 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']'
00:19:27.213 21:24:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1238420'
killing process with pid 1238420
00:19:27.213 21:24:42 -- common/autotest_common.sh@955 -- # kill 1238420
00:19:27.213 21:24:42 -- common/autotest_common.sh@960 -- # wait 1238420
00:19:27.782 21:24:42 -- nvmf/common.sh@481 -- # '[' '' == iso ']'
00:19:27.782 21:24:42 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]]
00:19:27.782 21:24:42 -- nvmf/common.sh@485 -- # nvmf_tcp_fini
00:19:27.782 21:24:42 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:19:27.782 21:24:42 -- nvmf/common.sh@278 -- # remove_spdk_ns
00:19:27.782 21:24:42 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:19:27.782 21:24:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:19:27.782 21:24:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:19:29.691 21:24:44 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:19:29.691
00:19:29.691 real 0m10.343s
00:19:29.691 user 0m14.381s
00:19:29.691 sys 0m4.440s
00:19:29.691 21:24:44 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:19:29.691 21:24:44 -- common/autotest_common.sh@10 -- # set +x
00:19:29.691 ************************************
00:19:29.691 END TEST nvmf_bdevio
00:19:29.691 ************************************
00:19:29.951 21:24:44 -- nvmf/nvmf.sh@58 -- # '[' tcp = tcp ']'
00:19:29.951 21:24:44 -- nvmf/nvmf.sh@59 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages
00:19:29.951 21:24:44 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']'
00:19:29.951 21:24:44 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:19:29.951 21:24:44 -- common/autotest_common.sh@10 -- # set +x
00:19:29.951 ************************************
00:19:29.951 START TEST nvmf_bdevio_no_huge
00:19:29.951 ************************************
00:19:29.951 21:24:44 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages
00:19:29.951 * Looking for test storage...
00:19:29.951 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target
00:19:29.951 21:24:44 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh
00:19:29.951 21:24:44 -- nvmf/common.sh@7 -- # uname -s
00:19:29.951 21:24:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:19:29.951 21:24:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:19:29.951 21:24:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:19:29.951 21:24:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:19:29.951 21:24:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:19:29.951 21:24:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:19:29.951 21:24:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:19:29.951 21:24:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:19:29.951 21:24:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:19:29.951 21:24:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:19:29.951 21:24:44 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b7babf-2e5c-ee11-906e-a4bf01970bf2
00:19:29.951 21:24:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=80b7babf-2e5c-ee11-906e-a4bf01970bf2
00:19:29.951 21:24:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:19:29.951 21:24:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:19:29.951 21:24:44 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:19:29.951 21:24:44 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:19:29.951 21:24:44 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh
00:19:29.951 21:24:44 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]]
00:19:29.951 21:24:44 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:19:29.951 21:24:44 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:19:29.951 21:24:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:19:29.951 21:24:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:19:29.951 21:24:44 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:19:29.951 21:24:44 -- paths/export.sh@5 -- # export PATH
00:19:29.951 21:24:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:19:29.951 21:24:44 -- nvmf/common.sh@47 -- # : 0
00:19:29.951 21:24:44 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:19:29.951 21:24:44 -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:19:29.951 21:24:44 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:19:29.951 21:24:44 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:19:29.951 21:24:44 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:19:29.951 21:24:44 -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:19:29.951 21:24:44 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:19:29.951 21:24:44 -- nvmf/common.sh@51 -- # have_pci_nics=0
00:19:29.951 21:24:44 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64
00:19:29.951 21:24:44 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:19:29.951 21:24:44 -- target/bdevio.sh@14 -- # nvmftestinit
00:19:29.951 21:24:44 -- nvmf/common.sh@430 -- # '[' -z tcp ']'
00:19:29.951 21:24:44 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:19:29.951 21:24:44 -- nvmf/common.sh@437 -- # prepare_net_devs
00:19:29.951 21:24:44 -- nvmf/common.sh@399 -- # local -g is_hw=no
00:19:29.951 21:24:44 -- nvmf/common.sh@401 -- # remove_spdk_ns
00:19:29.951 21:24:44 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:19:29.951 21:24:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:19:29.951 21:24:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:19:29.951 21:24:44 -- nvmf/common.sh@403 -- # [[ phy-fallback != virt ]]
00:19:29.951 21:24:44 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs
00:19:29.951 21:24:44 -- nvmf/common.sh@285 -- # xtrace_disable
00:19:29.951 21:24:44 -- common/autotest_common.sh@10 -- # set +x
00:19:35.223 21:24:49 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci
00:19:35.223 21:24:49 -- nvmf/common.sh@291 -- # pci_devs=()
00:19:35.223 21:24:49 -- nvmf/common.sh@291 -- # local -a pci_devs
00:19:35.223 21:24:49 -- nvmf/common.sh@292 -- # pci_net_devs=()
00:19:35.223 21:24:49 -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:19:35.223 21:24:49 -- nvmf/common.sh@293 -- # pci_drivers=()
00:19:35.223 21:24:49 -- nvmf/common.sh@293 -- # local -A pci_drivers
00:19:35.223 21:24:49 -- nvmf/common.sh@295 -- # net_devs=()
00:19:35.223 21:24:49 -- nvmf/common.sh@295 -- # local -ga net_devs
00:19:35.223 21:24:49 -- nvmf/common.sh@296 -- # e810=()
00:19:35.223 21:24:49 -- nvmf/common.sh@296 -- # local -ga e810
00:19:35.223 21:24:49 -- nvmf/common.sh@297 -- # x722=()
00:19:35.223 21:24:49 -- nvmf/common.sh@297 -- # local -ga x722
00:19:35.223 21:24:49 -- nvmf/common.sh@298 -- # mlx=()
00:19:35.223 21:24:49 -- nvmf/common.sh@298 -- # local -ga mlx
00:19:35.223 21:24:49 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:19:35.223 21:24:49 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:19:35.223 21:24:49 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:19:35.223 21:24:49 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:19:35.223 21:24:49 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:19:35.223 21:24:49 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:19:35.223 21:24:49 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:19:35.223 21:24:49 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:19:35.223 21:24:49 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:19:35.223 21:24:49 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:19:35.223 21:24:49 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:19:35.223 21:24:49 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:19:35.223 21:24:49 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:19:35.223 21:24:49 -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]]
00:19:35.223 21:24:49 -- nvmf/common.sh@329 -- # [[ '' == e810 ]]
00:19:35.223 21:24:49 -- nvmf/common.sh@331 -- # [[ '' == x722 ]]
00:19:35.223 21:24:49 -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:19:35.223 21:24:49 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:19:35.223 21:24:49 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)'
00:19:35.223 Found 0000:27:00.0 (0x8086 - 0x159b)
00:19:35.223 21:24:49 -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:19:35.223 21:24:49 -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:19:35.223 21:24:49 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:19:35.223 21:24:49 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:19:35.223 21:24:49 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:19:35.223 21:24:49 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:19:35.223 21:24:49 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)'
00:19:35.223 Found 0000:27:00.1 (0x8086 - 0x159b)
00:19:35.223 21:24:49 -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:19:35.223 21:24:49 -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:19:35.223 21:24:49 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:19:35.223 21:24:49 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:19:35.223 21:24:49 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:19:35.223 21:24:49 -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:19:35.223 21:24:49 -- nvmf/common.sh@372 -- # [[ '' == e810 ]]
00:19:35.223 21:24:49 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:19:35.223 21:24:49 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:19:35.223 21:24:49 -- nvmf/common.sh@384 -- # (( 1 == 0 ))
00:19:35.223 21:24:49 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:19:35.223 21:24:49 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0'
00:19:35.223 Found net devices under 0000:27:00.0: cvl_0_0
00:19:35.223 21:24:49 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}")
00:19:35.223 21:24:49 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:19:35.223 21:24:49 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:19:35.223 21:24:49 -- nvmf/common.sh@384 -- # (( 1 == 0 ))
00:19:35.223 21:24:49 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:19:35.223 21:24:49 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1'
00:19:35.223 Found net devices under 0000:27:00.1: cvl_0_1
00:19:35.223 21:24:49 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}")
00:19:35.223 21:24:49 -- nvmf/common.sh@393 -- # (( 2 == 0 ))
00:19:35.223 21:24:49 -- nvmf/common.sh@403 -- # is_hw=yes
00:19:35.223 21:24:49 -- nvmf/common.sh@405 -- # [[ yes == yes ]]
00:19:35.223 21:24:49 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]]
00:19:35.223 21:24:49 -- nvmf/common.sh@407 -- # nvmf_tcp_init
00:19:35.223 21:24:49 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:19:35.223 21:24:49 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:19:35.223 21:24:49 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:19:35.223 21:24:49 -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:19:35.223 21:24:49 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:19:35.223 21:24:49 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:19:35.223 21:24:49 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:19:35.223 21:24:49 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:19:35.223 21:24:49 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:19:35.223 21:24:49 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:19:35.223 21:24:49 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:19:35.223 21:24:49 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:19:35.223 21:24:49 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:19:35.223 21:24:49 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:19:35.223 21:24:49 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:19:35.223 21:24:49 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:19:35.223 21:24:49 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:19:35.223 21:24:49 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:19:35.223 21:24:49 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:19:35.223 21:24:49 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:19:35.223 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:19:35.223 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.193 ms
00:19:35.223
00:19:35.223 --- 10.0.0.2 ping statistics ---
00:19:35.223 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:19:35.223 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms
00:19:35.223 21:24:49 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:19:35.223 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:19:35.224 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.086 ms
00:19:35.224
00:19:35.224 --- 10.0.0.1 ping statistics ---
00:19:35.224 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:19:35.224 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms
00:19:35.224 21:24:49 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:19:35.224 21:24:49 -- nvmf/common.sh@411 -- # return 0
00:19:35.224 21:24:49 -- nvmf/common.sh@439 -- # '[' '' == iso ']'
00:19:35.224 21:24:49 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:19:35.224 21:24:49 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]]
00:19:35.224 21:24:49 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]]
00:19:35.224 21:24:49 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:19:35.224 21:24:49 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']'
00:19:35.224 21:24:49 -- nvmf/common.sh@463 -- # modprobe nvme-tcp
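The no_huge variant repeats the identical flow; nvmfappstart is called the same way, but the expanded launch echoed at nvmf/common.sh@469 below adds --no-huge -s 1024, so the target runs out of 1024 MB of dynamically allocated memory instead of hugepages (bdevio is later started with the same pair of flags):

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78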
00:19:35.224 21:24:49 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78
00:19:35.224 21:24:49 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt
00:19:35.224 21:24:49 -- common/autotest_common.sh@710 -- # xtrace_disable
00:19:35.224 21:24:49 -- common/autotest_common.sh@10 -- # set +x
00:19:35.224 21:24:49 -- nvmf/common.sh@470 -- # nvmfpid=1242886
00:19:35.224 21:24:49 -- nvmf/common.sh@471 -- # waitforlisten 1242886
00:19:35.224 21:24:49 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78
00:19:35.224 21:24:49 -- common/autotest_common.sh@817 -- # '[' -z 1242886 ']'
00:19:35.224 21:24:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:19:35.224 21:24:49 -- common/autotest_common.sh@822 -- # local max_retries=100
00:19:35.224 21:24:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:19:35.224 21:24:49 -- common/autotest_common.sh@826 -- # xtrace_disable
00:19:35.224 21:24:49 -- common/autotest_common.sh@10 -- # set +x
00:19:35.224 [2024-04-24 21:24:50.085009] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization...
00:19:35.224 [2024-04-24 21:24:50.085148] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ]
00:19:35.481 [2024-04-24 21:24:50.229070] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4
00:19:35.481 [2024-04-24 21:24:50.346659] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:19:35.481 [2024-04-24 21:24:50.346704] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:19:35.481 [2024-04-24 21:24:50.346715] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:19:35.481 [2024-04-24 21:24:50.346725] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running.
00:19:35.481 [2024-04-24 21:24:50.346733] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:19:35.481 [2024-04-24 21:24:50.346937] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4
00:19:35.481 [2024-04-24 21:24:50.347076] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5
00:19:35.481 [2024-04-24 21:24:50.347179] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:19:35.481 [2024-04-24 21:24:50.347207] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6
00:19:36.049 21:24:50 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:19:36.049 21:24:50 -- common/autotest_common.sh@850 -- # return 0
00:19:36.049 21:24:50 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt
00:19:36.049 21:24:50 -- common/autotest_common.sh@716 -- # xtrace_disable
00:19:36.049 21:24:50 -- common/autotest_common.sh@10 -- # set +x
00:19:36.049 21:24:50 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:19:36.049 21:24:50 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:19:36.049 21:24:50 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:36.049 21:24:50 -- common/autotest_common.sh@10 -- # set +x
00:19:36.049 [2024-04-24 21:24:50.816831] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:19:36.049 21:24:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:36.049 21:24:50 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:19:36.049 21:24:50 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:36.049 21:24:50 -- common/autotest_common.sh@10 -- # set +x
00:19:36.049 Malloc0
00:19:36.049 21:24:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:36.049 21:24:50 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:19:36.049 21:24:50 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:36.049 21:24:50 -- common/autotest_common.sh@10 -- # set +x
00:19:36.049 21:24:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:36.049 21:24:50 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:19:36.049 21:24:50 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:36.049 21:24:50 -- common/autotest_common.sh@10 -- # set +x
00:19:36.049 21:24:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:36.049 21:24:50 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:19:36.049 21:24:50 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:36.049 21:24:50 -- common/autotest_common.sh@10 -- # set +x
00:19:36.049 [2024-04-24 21:24:50.880630] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:19:36.049 21:24:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:36.049 21:24:50 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024
00:19:36.049 21:24:50 -- target/bdevio.sh@24 -- # gen_nvmf_target_json
00:19:36.049 21:24:50 -- nvmf/common.sh@521 -- # config=()
00:19:36.049 21:24:50 -- nvmf/common.sh@521 -- # local subsystem config
00:19:36.049 21:24:50 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}"
00:19:36.049 21:24:50 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF
00:19:36.049 {
00:19:36.049 "params": {
00:19:36.049 "name": "Nvme$subsystem",
00:19:36.049 "trtype": "$TEST_TRANSPORT",
00:19:36.049 "traddr": "$NVMF_FIRST_TARGET_IP",
00:19:36.049 "adrfam": "ipv4",
00:19:36.049 "trsvcid": "$NVMF_PORT",
00:19:36.049 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:19:36.049 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:19:36.049 "hdgst": ${hdgst:-false},
00:19:36.049 "ddgst": ${ddgst:-false}
00:19:36.049 },
00:19:36.049 "method": "bdev_nvme_attach_controller"
00:19:36.049 }
00:19:36.049 EOF
00:19:36.049 )")
00:19:36.049 21:24:50 -- nvmf/common.sh@543 -- # cat
00:19:36.049 21:24:50 -- nvmf/common.sh@545 -- # jq .
00:19:36.049 21:24:50 -- nvmf/common.sh@546 -- # IFS=,
00:19:36.049 21:24:50 -- nvmf/common.sh@547 -- # printf '%s\n' '{
00:19:36.049 "params": {
00:19:36.049 "name": "Nvme1",
00:19:36.049 "trtype": "tcp",
00:19:36.049 "traddr": "10.0.0.2",
00:19:36.049 "adrfam": "ipv4",
00:19:36.049 "trsvcid": "4420",
00:19:36.049 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:19:36.049 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:19:36.049 "hdgst": false,
00:19:36.049 "ddgst": false
00:19:36.049 },
00:19:36.049 "method": "bdev_nvme_attach_controller"
00:19:36.049 }'
00:19:36.049 [2024-04-24 21:24:50.955205] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization...
00:19:36.049 [2024-04-24 21:24:50.955355] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1242974 ]
00:19:36.308 [2024-04-24 21:24:51.103802] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3
00:19:36.308 [2024-04-24 21:24:51.226089] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:19:36.308 [2024-04-24 21:24:51.226194] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:19:36.308 [2024-04-24 21:24:51.226205] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:19:36.877 I/O targets:
00:19:36.877 Nvme1n1: 131072 blocks of 512 bytes (64 MiB)
00:19:36.877
00:19:36.877
00:19:36.877 CUnit - A unit testing framework for C - Version 2.1-3
00:19:36.877 http://cunit.sourceforge.net/
00:19:36.877
00:19:36.877
00:19:36.877 Suite: bdevio tests on: Nvme1n1
00:19:36.877 Test: blockdev write read block ...passed
00:19:36.877 Test: blockdev write zeroes read block ...passed
00:19:36.877 Test: blockdev write zeroes read no split ...passed
00:19:36.877 Test: blockdev write zeroes read split ...passed
00:19:36.877 Test: blockdev write zeroes read split partial ...passed
00:19:36.877 Test: blockdev reset ...[2024-04-24 21:24:51.789699] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:19:36.877 [2024-04-24 21:24:51.789805] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor
00:19:36.877 [2024-04-24 21:24:51.807909] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:19:36.877 passed
00:19:36.877 Test: blockdev write read 8 blocks ...passed
00:19:36.877 Test: blockdev write read size > 128k ...passed
00:19:36.877 Test: blockdev write read invalid size ...passed
00:19:37.136 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:19:37.136 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:19:37.136 Test: blockdev write read max offset ...passed
00:19:37.136 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:19:37.136 Test: blockdev writev readv 8 blocks ...passed
00:19:37.136 Test: blockdev writev readv 30 x 1block ...passed
00:19:37.136 Test: blockdev writev readv block ...passed
00:19:37.136 Test: blockdev writev readv size > 128k ...passed
00:19:37.136 Test: blockdev writev readv size > 128k in two iovs ...passed
00:19:37.136 Test: blockdev comparev and writev ...[2024-04-24 21:24:52.023879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:19:37.136 [2024-04-24 21:24:52.023918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:19:37.136 [2024-04-24 21:24:52.023935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:19:37.136 [2024-04-24 21:24:52.023947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:19:37.136 [2024-04-24 21:24:52.024368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:19:37.136 [2024-04-24 21:24:52.024378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:19:37.136 [2024-04-24 21:24:52.024391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:19:37.136 [2024-04-24 21:24:52.024399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:19:37.136 [2024-04-24 21:24:52.024765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:19:37.136 [2024-04-24 21:24:52.024773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:19:37.136 [2024-04-24 21:24:52.024786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:19:37.136 [2024-04-24 21:24:52.024794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:19:37.136 [2024-04-24 21:24:52.025175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:19:37.136 [2024-04-24 21:24:52.025185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:19:37.136 [2024-04-24 21:24:52.025198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:19:37.136 [2024-04-24 21:24:52.025211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:19:37.136 passed
00:19:37.395 Test: blockdev nvme passthru rw ...passed
00:19:37.395 Test: blockdev nvme passthru vendor specific ...[2024-04-24 21:24:52.108696] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:19:37.395 [2024-04-24 21:24:52.108719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:19:37.395 [2024-04-24 21:24:52.108895] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:19:37.395 [2024-04-24 21:24:52.108903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:19:37.395 [2024-04-24 21:24:52.109067] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:19:37.395 [2024-04-24 21:24:52.109076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:19:37.395 [2024-04-24 21:24:52.109239] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:19:37.395 [2024-04-24 21:24:52.109248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:19:37.395 passed
00:19:37.395 Test: blockdev nvme admin passthru ...passed
00:19:37.395 Test: blockdev copy ...passed
00:19:37.395
00:19:37.395 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:19:37.395               suites      1      1    n/a      0        0
00:19:37.395                tests     23     23     23      0        0
00:19:37.395              asserts    152    152    152      0      n/a
00:19:37.395
00:19:37.395 Elapsed time =    1.187 seconds
common/autotest_common.sh@942 -- # process_name=reactor_3 00:19:37.653 21:24:52 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:19:37.653 21:24:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1242886' 00:19:37.653 killing process with pid 1242886 00:19:37.653 21:24:52 -- common/autotest_common.sh@955 -- # kill 1242886 00:19:37.653 21:24:52 -- common/autotest_common.sh@960 -- # wait 1242886 00:19:38.229 21:24:53 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:38.229 21:24:53 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:19:38.229 21:24:53 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:19:38.230 21:24:53 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:38.230 21:24:53 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:38.230 21:24:53 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:38.230 21:24:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:38.230 21:24:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:40.139 21:24:55 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:40.139 00:19:40.139 real 0m10.324s 00:19:40.139 user 0m14.671s 00:19:40.139 sys 0m4.756s 00:19:40.139 21:24:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:40.139 21:24:55 -- common/autotest_common.sh@10 -- # set +x 00:19:40.139 ************************************ 00:19:40.140 END TEST nvmf_bdevio_no_huge 00:19:40.140 ************************************ 00:19:40.398 21:24:55 -- nvmf/nvmf.sh@60 -- # run_test nvmf_tls /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:40.399 21:24:55 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:40.399 21:24:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:40.399 21:24:55 -- common/autotest_common.sh@10 -- # set +x 00:19:40.399 ************************************ 00:19:40.399 START TEST nvmf_tls 00:19:40.399 ************************************ 00:19:40.399 21:24:55 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:40.399 * Looking for test storage... 
00:19:40.399 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:19:40.399 21:24:55 -- target/tls.sh@10 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:19:40.399 21:24:55 -- nvmf/common.sh@7 -- # uname -s 00:19:40.399 21:24:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:40.399 21:24:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:40.399 21:24:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:40.399 21:24:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:40.399 21:24:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:40.399 21:24:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:40.399 21:24:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:40.399 21:24:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:40.399 21:24:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:40.399 21:24:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:40.399 21:24:55 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b7babf-2e5c-ee11-906e-a4bf01970bf2 00:19:40.399 21:24:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=80b7babf-2e5c-ee11-906e-a4bf01970bf2 00:19:40.399 21:24:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:40.399 21:24:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:40.399 21:24:55 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:19:40.399 21:24:55 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:40.399 21:24:55 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:19:40.399 21:24:55 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:40.399 21:24:55 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:40.399 21:24:55 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:40.399 21:24:55 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:40.399 21:24:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:40.399 21:24:55 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:40.399 21:24:55 -- paths/export.sh@5 -- # export PATH 00:19:40.399 21:24:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:40.399 21:24:55 -- nvmf/common.sh@47 -- # : 0 00:19:40.399 21:24:55 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:40.399 21:24:55 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:40.399 21:24:55 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:40.399 21:24:55 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:40.399 21:24:55 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:40.399 21:24:55 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:40.399 21:24:55 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:40.399 21:24:55 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:40.399 21:24:55 -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:19:40.399 21:24:55 -- target/tls.sh@62 -- # nvmftestinit 00:19:40.399 21:24:55 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:19:40.399 21:24:55 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:40.399 21:24:55 -- nvmf/common.sh@437 -- # prepare_net_devs 00:19:40.399 21:24:55 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:19:40.399 21:24:55 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:40.399 21:24:55 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:40.399 21:24:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:40.399 21:24:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:40.399 21:24:55 -- nvmf/common.sh@403 -- # [[ phy-fallback != virt ]] 00:19:40.399 21:24:55 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:19:40.399 21:24:55 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:40.399 21:24:55 -- common/autotest_common.sh@10 -- # set +x 00:19:45.672 21:25:00 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:45.672 21:25:00 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:45.672 21:25:00 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:45.672 21:25:00 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:45.672 21:25:00 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:45.672 21:25:00 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:45.672 21:25:00 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:45.672 21:25:00 -- nvmf/common.sh@295 -- # net_devs=() 00:19:45.672 21:25:00 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:45.672 21:25:00 -- nvmf/common.sh@296 -- # e810=() 
00:19:45.672 21:25:00 -- nvmf/common.sh@296 -- # local -ga e810 00:19:45.672 21:25:00 -- nvmf/common.sh@297 -- # x722=() 00:19:45.672 21:25:00 -- nvmf/common.sh@297 -- # local -ga x722 00:19:45.672 21:25:00 -- nvmf/common.sh@298 -- # mlx=() 00:19:45.672 21:25:00 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:45.672 21:25:00 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:45.672 21:25:00 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:45.672 21:25:00 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:45.672 21:25:00 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:45.672 21:25:00 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:45.672 21:25:00 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:45.672 21:25:00 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:45.672 21:25:00 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:45.672 21:25:00 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:45.672 21:25:00 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:45.672 21:25:00 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:45.672 21:25:00 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:45.672 21:25:00 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:45.672 21:25:00 -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:19:45.672 21:25:00 -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:19:45.672 21:25:00 -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:19:45.672 21:25:00 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:45.672 21:25:00 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:45.672 21:25:00 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:19:45.672 Found 0000:27:00.0 (0x8086 - 0x159b) 00:19:45.672 21:25:00 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:45.672 21:25:00 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:45.672 21:25:00 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:45.672 21:25:00 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:45.672 21:25:00 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:45.672 21:25:00 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:45.672 21:25:00 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:19:45.672 Found 0000:27:00.1 (0x8086 - 0x159b) 00:19:45.672 21:25:00 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:45.672 21:25:00 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:45.672 21:25:00 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:45.672 21:25:00 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:45.672 21:25:00 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:45.672 21:25:00 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:45.672 21:25:00 -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:19:45.672 21:25:00 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:45.672 21:25:00 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:45.672 21:25:00 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:45.672 21:25:00 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:45.672 21:25:00 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:19:45.672 Found net devices under 0000:27:00.0: cvl_0_0 00:19:45.672 21:25:00 -- nvmf/common.sh@390 -- # 
net_devs+=("${pci_net_devs[@]}") 00:19:45.672 21:25:00 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:45.672 21:25:00 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:45.672 21:25:00 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:45.672 21:25:00 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:45.672 21:25:00 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:19:45.672 Found net devices under 0000:27:00.1: cvl_0_1 00:19:45.672 21:25:00 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:45.672 21:25:00 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:19:45.672 21:25:00 -- nvmf/common.sh@403 -- # is_hw=yes 00:19:45.672 21:25:00 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:19:45.672 21:25:00 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:19:45.672 21:25:00 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:19:45.672 21:25:00 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:45.672 21:25:00 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:45.672 21:25:00 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:45.672 21:25:00 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:45.672 21:25:00 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:45.672 21:25:00 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:45.672 21:25:00 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:45.672 21:25:00 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:45.672 21:25:00 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:45.672 21:25:00 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:45.672 21:25:00 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:45.672 21:25:00 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:45.672 21:25:00 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:45.933 21:25:00 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:45.933 21:25:00 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:45.933 21:25:00 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:45.933 21:25:00 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:45.933 21:25:00 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:45.933 21:25:00 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:45.933 21:25:00 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:45.933 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:45.933 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.571 ms 00:19:45.933 00:19:45.933 --- 10.0.0.2 ping statistics --- 00:19:45.933 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:45.933 rtt min/avg/max/mdev = 0.571/0.571/0.571/0.000 ms 00:19:45.933 21:25:00 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:45.933 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:45.933 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.287 ms 00:19:45.933 00:19:45.933 --- 10.0.0.1 ping statistics --- 00:19:45.933 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:45.933 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:19:45.933 21:25:00 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:45.933 21:25:00 -- nvmf/common.sh@411 -- # return 0 00:19:45.933 21:25:00 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:19:45.933 21:25:00 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:45.933 21:25:00 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:19:45.933 21:25:00 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:19:45.933 21:25:00 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:45.933 21:25:00 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:19:45.933 21:25:00 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:19:45.933 21:25:00 -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:19:45.933 21:25:00 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:45.933 21:25:00 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:45.933 21:25:00 -- common/autotest_common.sh@10 -- # set +x 00:19:45.933 21:25:00 -- nvmf/common.sh@470 -- # nvmfpid=1247463 00:19:45.933 21:25:00 -- nvmf/common.sh@471 -- # waitforlisten 1247463 00:19:45.933 21:25:00 -- common/autotest_common.sh@817 -- # '[' -z 1247463 ']' 00:19:45.933 21:25:00 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:45.933 21:25:00 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:45.933 21:25:00 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:45.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:45.933 21:25:00 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:45.933 21:25:00 -- common/autotest_common.sh@10 -- # set +x 00:19:45.933 21:25:00 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:19:46.194 [2024-04-24 21:25:00.935464] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization... 00:19:46.194 [2024-04-24 21:25:00.935612] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:46.194 EAL: No free 2048 kB hugepages reported on node 1 00:19:46.194 [2024-04-24 21:25:01.078607] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:46.455 [2024-04-24 21:25:01.176697] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:46.455 [2024-04-24 21:25:01.176743] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:46.455 [2024-04-24 21:25:01.176754] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:46.455 [2024-04-24 21:25:01.176765] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:46.455 [2024-04-24 21:25:01.176773] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
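The target process is launched at this point; once its reactor comes up (next line) and waitforlisten returns, the suite turns to the socket-layer TLS knobs. The stretch of log that follows repeats one pattern several times: set an option on the ssl socket implementation over JSON-RPC, read it back with sock_impl_get_options, and compare. A minimal sketch of that pattern, assuming only that rpc.py and jq are on PATH (the log invokes rpc.py by its full workspace path):

  # sketch of the set/verify loop tls.sh runs below for tls-version and ktls
  rpc.py sock_set_default_impl -i ssl
  rpc.py sock_impl_set_options -i ssl --tls-version 13
  version=$(rpc.py sock_impl_get_options -i ssl | jq -r .tls_version)
  [[ $version == 13 ]] || exit 1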
00:19:46.455 [2024-04-24 21:25:01.176809] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:46.715 21:25:01 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:46.715 21:25:01 -- common/autotest_common.sh@850 -- # return 0 00:19:46.715 21:25:01 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:46.715 21:25:01 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:46.715 21:25:01 -- common/autotest_common.sh@10 -- # set +x 00:19:46.715 21:25:01 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:46.715 21:25:01 -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:19:46.715 21:25:01 -- target/tls.sh@70 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:19:46.975 true 00:19:46.975 21:25:01 -- target/tls.sh@73 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:46.975 21:25:01 -- target/tls.sh@73 -- # jq -r .tls_version 00:19:47.235 21:25:01 -- target/tls.sh@73 -- # version=0 00:19:47.235 21:25:01 -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:19:47.235 21:25:01 -- target/tls.sh@80 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:47.235 21:25:02 -- target/tls.sh@81 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:47.235 21:25:02 -- target/tls.sh@81 -- # jq -r .tls_version 00:19:47.496 21:25:02 -- target/tls.sh@81 -- # version=13 00:19:47.496 21:25:02 -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:19:47.496 21:25:02 -- target/tls.sh@88 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:19:47.496 21:25:02 -- target/tls.sh@89 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:47.496 21:25:02 -- target/tls.sh@89 -- # jq -r .tls_version 00:19:47.756 21:25:02 -- target/tls.sh@89 -- # version=7 00:19:47.756 21:25:02 -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:19:47.757 21:25:02 -- target/tls.sh@96 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:47.757 21:25:02 -- target/tls.sh@96 -- # jq -r .enable_ktls 00:19:47.757 21:25:02 -- target/tls.sh@96 -- # ktls=false 00:19:47.757 21:25:02 -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:19:47.757 21:25:02 -- target/tls.sh@103 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:19:48.017 21:25:02 -- target/tls.sh@104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:48.017 21:25:02 -- target/tls.sh@104 -- # jq -r .enable_ktls 00:19:48.017 21:25:02 -- target/tls.sh@104 -- # ktls=true 00:19:48.017 21:25:02 -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:19:48.017 21:25:02 -- target/tls.sh@111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:19:48.281 21:25:03 -- target/tls.sh@112 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:48.281 21:25:03 -- target/tls.sh@112 -- # jq -r .enable_ktls 00:19:48.281 21:25:03 -- target/tls.sh@112 -- # ktls=false 00:19:48.281 21:25:03 -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:19:48.281 21:25:03 -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:19:48.281 21:25:03 -- nvmf/common.sh@704 -- # 
format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:19:48.281 21:25:03 -- nvmf/common.sh@691 -- # local prefix key digest 00:19:48.281 21:25:03 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:19:48.281 21:25:03 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:19:48.281 21:25:03 -- nvmf/common.sh@693 -- # digest=1 00:19:48.281 21:25:03 -- nvmf/common.sh@694 -- # python - 00:19:48.281 21:25:03 -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:48.281 21:25:03 -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:19:48.281 21:25:03 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:19:48.281 21:25:03 -- nvmf/common.sh@691 -- # local prefix key digest 00:19:48.281 21:25:03 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:19:48.281 21:25:03 -- nvmf/common.sh@693 -- # key=ffeeddccbbaa99887766554433221100 00:19:48.281 21:25:03 -- nvmf/common.sh@693 -- # digest=1 00:19:48.281 21:25:03 -- nvmf/common.sh@694 -- # python - 00:19:48.539 21:25:03 -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:48.539 21:25:03 -- target/tls.sh@121 -- # mktemp 00:19:48.539 21:25:03 -- target/tls.sh@121 -- # key_path=/tmp/tmp.fxgZIEfVzF 00:19:48.539 21:25:03 -- target/tls.sh@122 -- # mktemp 00:19:48.539 21:25:03 -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.Hwn7JSFT0J 00:19:48.539 21:25:03 -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:48.539 21:25:03 -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:48.539 21:25:03 -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.fxgZIEfVzF 00:19:48.539 21:25:03 -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.Hwn7JSFT0J 00:19:48.539 21:25:03 -- target/tls.sh@130 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:48.539 21:25:03 -- target/tls.sh@131 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:19:48.798 21:25:03 -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.fxgZIEfVzF 00:19:48.798 21:25:03 -- target/tls.sh@49 -- # local key=/tmp/tmp.fxgZIEfVzF 00:19:48.798 21:25:03 -- target/tls.sh@51 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:49.057 [2024-04-24 21:25:03.832133] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:49.057 21:25:03 -- target/tls.sh@52 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:49.057 21:25:03 -- target/tls.sh@53 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:49.318 [2024-04-24 21:25:04.096161] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:49.318 [2024-04-24 21:25:04.096400] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:49.318 21:25:04 -- target/tls.sh@55 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:49.318 malloc0 00:19:49.318 21:25:04 -- target/tls.sh@56 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:49.579 21:25:04 -- 
target/tls.sh@58 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.fxgZIEfVzF 00:19:49.579 [2024-04-24 21:25:04.514101] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:49.579 21:25:04 -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.fxgZIEfVzF 00:19:49.841 EAL: No free 2048 kB hugepages reported on node 1 00:19:59.912 Initializing NVMe Controllers 00:19:59.912 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:59.912 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:59.912 Initialization complete. Launching workers. 00:19:59.912 ======================================================== 00:19:59.912 Latency(us) 00:19:59.912 Device Information : IOPS MiB/s Average min max 00:19:59.912 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16791.70 65.59 3811.77 1088.50 6570.43 00:19:59.912 ======================================================== 00:19:59.912 Total : 16791.70 65.59 3811.77 1088.50 6570.43 00:19:59.912 00:19:59.912 21:25:14 -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.fxgZIEfVzF 00:19:59.912 21:25:14 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:59.912 21:25:14 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:59.912 21:25:14 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:59.912 21:25:14 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.fxgZIEfVzF' 00:19:59.912 21:25:14 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:59.912 21:25:14 -- target/tls.sh@28 -- # bdevperf_pid=1250091 00:19:59.912 21:25:14 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:59.912 21:25:14 -- target/tls.sh@31 -- # waitforlisten 1250091 /var/tmp/bdevperf.sock 00:19:59.912 21:25:14 -- common/autotest_common.sh@817 -- # '[' -z 1250091 ']' 00:19:59.912 21:25:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:59.912 21:25:14 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:59.912 21:25:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:59.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:59.912 21:25:14 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:59.912 21:25:14 -- common/autotest_common.sh@10 -- # set +x 00:19:59.912 21:25:14 -- target/tls.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:59.912 [2024-04-24 21:25:14.776894] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization... 
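Worth pausing on the key strings generated above: format_interchange_psk wraps a configured PSK into the NVMeTLSkey-1 interchange form by piping it through an inline python snippet (nvmf/common.sh@694). Judging purely from the values printed in this log, the body appears to be base64 over the PSK bytes plus a 4-byte CRC32 trailer; the little-endian byte order of that trailer is an assumption here, so treat this as a sketch of the observed output, not the normative derivation:

  # sketch: reconstruct the NVMeTLSkey-1:01:MDAx...JEiQ: string seen above
  # (CRC trailer byte order is assumed, not confirmed by this log)
  python3 -c 'import base64, zlib; psk = b"00112233445566778899aabbccddeeff"; print("NVMeTLSkey-1:01:" + base64.b64encode(psk + zlib.crc32(psk).to_bytes(4, "little")).decode() + ":")'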
00:19:59.912 [2024-04-24 21:25:14.777015] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1250091 ] 00:19:59.912 EAL: No free 2048 kB hugepages reported on node 1 00:20:00.170 [2024-04-24 21:25:14.891202] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:00.170 [2024-04-24 21:25:14.985547] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:00.736 21:25:15 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:00.736 21:25:15 -- common/autotest_common.sh@850 -- # return 0 00:20:00.736 21:25:15 -- target/tls.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.fxgZIEfVzF 00:20:00.736 [2024-04-24 21:25:15.608148] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:00.736 [2024-04-24 21:25:15.608265] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:00.736 TLSTESTn1 00:20:00.996 21:25:15 -- target/tls.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:00.996 Running I/O for 10 seconds... 00:20:10.973 00:20:10.973 Latency(us) 00:20:10.973 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:10.973 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:10.973 Verification LBA range: start 0x0 length 0x2000 00:20:10.973 TLSTESTn1 : 10.01 6176.29 24.13 0.00 0.00 20692.38 5311.87 44426.51 00:20:10.973 =================================================================================================================== 00:20:10.973 Total : 6176.29 24.13 0.00 0.00 20692.38 5311.87 44426.51 00:20:10.973 0 00:20:10.974 21:25:25 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:10.974 21:25:25 -- target/tls.sh@45 -- # killprocess 1250091 00:20:10.974 21:25:25 -- common/autotest_common.sh@936 -- # '[' -z 1250091 ']' 00:20:10.974 21:25:25 -- common/autotest_common.sh@940 -- # kill -0 1250091 00:20:10.974 21:25:25 -- common/autotest_common.sh@941 -- # uname 00:20:10.974 21:25:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:10.974 21:25:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1250091 00:20:10.974 21:25:25 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:20:10.974 21:25:25 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:20:10.974 21:25:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1250091' 00:20:10.974 killing process with pid 1250091 00:20:10.974 21:25:25 -- common/autotest_common.sh@955 -- # kill 1250091 00:20:10.974 Received shutdown signal, test time was about 10.000000 seconds 00:20:10.974 00:20:10.974 Latency(us) 00:20:10.974 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:10.974 =================================================================================================================== 00:20:10.974 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:10.974 [2024-04-24 21:25:25.839026] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: 
deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:10.974 21:25:25 -- common/autotest_common.sh@960 -- # wait 1250091 00:20:11.541 21:25:26 -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Hwn7JSFT0J 00:20:11.541 21:25:26 -- common/autotest_common.sh@638 -- # local es=0 00:20:11.541 21:25:26 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Hwn7JSFT0J 00:20:11.541 21:25:26 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:20:11.541 21:25:26 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:11.541 21:25:26 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:20:11.541 21:25:26 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:11.541 21:25:26 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Hwn7JSFT0J 00:20:11.541 21:25:26 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:11.541 21:25:26 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:11.541 21:25:26 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:11.541 21:25:26 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.Hwn7JSFT0J' 00:20:11.541 21:25:26 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:11.541 21:25:26 -- target/tls.sh@28 -- # bdevperf_pid=1252294 00:20:11.541 21:25:26 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:11.541 21:25:26 -- target/tls.sh@31 -- # waitforlisten 1252294 /var/tmp/bdevperf.sock 00:20:11.541 21:25:26 -- common/autotest_common.sh@817 -- # '[' -z 1252294 ']' 00:20:11.541 21:25:26 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:11.541 21:25:26 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:11.541 21:25:26 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:11.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:11.541 21:25:26 -- target/tls.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:11.541 21:25:26 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:11.541 21:25:26 -- common/autotest_common.sh@10 -- # set +x 00:20:11.541 [2024-04-24 21:25:26.287449] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization... 
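From here the suite drives its negative cases through the NOT wrapper whose xtrace is visible above (local es=0, valid_exec_arg, and the closing (( !es == 0 )) after each case). Stripped of the signal handling around (( es > 128 )), the helper reduces to roughly this simplified sketch: succeed only when the wrapped command fails.

  # simplified sketch of autotest_common.sh's NOT guard (signal handling omitted)
  NOT() {
      local es=0
      "$@" || es=$?
      (( es != 0 ))
  }

So NOT run_bdevperf ... passes exactly when the attach attempt errors out, which is what each of the mismatched-key, unknown-host, unknown-subsystem, and missing-key cases below expects.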
00:20:11.541 [2024-04-24 21:25:26.287567] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1252294 ] 00:20:11.541 EAL: No free 2048 kB hugepages reported on node 1 00:20:11.541 [2024-04-24 21:25:26.397016] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:11.541 [2024-04-24 21:25:26.491167] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:12.112 21:25:27 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:12.112 21:25:27 -- common/autotest_common.sh@850 -- # return 0 00:20:12.112 21:25:27 -- target/tls.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Hwn7JSFT0J 00:20:12.372 [2024-04-24 21:25:27.124344] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:12.372 [2024-04-24 21:25:27.124472] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:12.372 [2024-04-24 21:25:27.137078] /var/jenkins/workspace/dsa-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:12.372 [2024-04-24 21:25:27.137145] /var/jenkins/workspace/dsa-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:12.372 [2024-04-24 21:25:27.138112] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004c40 (9): Bad file descriptor 00:20:12.372 [2024-04-24 21:25:27.139112] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:12.372 [2024-04-24 21:25:27.139130] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:12.372 [2024-04-24 21:25:27.139145] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:20:12.372 request: 00:20:12.372 { 00:20:12.372 "name": "TLSTEST", 00:20:12.372 "trtype": "tcp", 00:20:12.372 "traddr": "10.0.0.2", 00:20:12.372 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:12.372 "adrfam": "ipv4", 00:20:12.372 "trsvcid": "4420", 00:20:12.372 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:12.372 "psk": "/tmp/tmp.Hwn7JSFT0J", 00:20:12.372 "method": "bdev_nvme_attach_controller", 00:20:12.372 "req_id": 1 00:20:12.372 } 00:20:12.372 Got JSON-RPC error response 00:20:12.372 response: 00:20:12.372 { 00:20:12.372 "code": -32602, 00:20:12.372 "message": "Invalid parameters" 00:20:12.372 } 00:20:12.372 21:25:27 -- target/tls.sh@36 -- # killprocess 1252294 00:20:12.372 21:25:27 -- common/autotest_common.sh@936 -- # '[' -z 1252294 ']' 00:20:12.372 21:25:27 -- common/autotest_common.sh@940 -- # kill -0 1252294 00:20:12.372 21:25:27 -- common/autotest_common.sh@941 -- # uname 00:20:12.372 21:25:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:12.372 21:25:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1252294 00:20:12.372 21:25:27 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:20:12.372 21:25:27 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:20:12.372 21:25:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1252294' 00:20:12.372 killing process with pid 1252294 00:20:12.372 21:25:27 -- common/autotest_common.sh@955 -- # kill 1252294 00:20:12.372 Received shutdown signal, test time was about 10.000000 seconds 00:20:12.372 00:20:12.372 Latency(us) 00:20:12.372 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:12.372 =================================================================================================================== 00:20:12.372 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:12.372 [2024-04-24 21:25:27.200587] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:12.372 21:25:27 -- common/autotest_common.sh@960 -- # wait 1252294 00:20:12.631 21:25:27 -- target/tls.sh@37 -- # return 1 00:20:12.631 21:25:27 -- common/autotest_common.sh@641 -- # es=1 00:20:12.631 21:25:27 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:20:12.631 21:25:27 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:20:12.631 21:25:27 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:20:12.631 21:25:27 -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.fxgZIEfVzF 00:20:12.631 21:25:27 -- common/autotest_common.sh@638 -- # local es=0 00:20:12.631 21:25:27 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.fxgZIEfVzF 00:20:12.631 21:25:27 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:20:12.631 21:25:27 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:12.631 21:25:27 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:20:12.631 21:25:27 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:12.631 21:25:27 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.fxgZIEfVzF 00:20:12.631 21:25:27 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:12.632 21:25:27 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:12.632 21:25:27 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 
00:20:12.632 21:25:27 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.fxgZIEfVzF' 00:20:12.632 21:25:27 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:12.632 21:25:27 -- target/tls.sh@28 -- # bdevperf_pid=1252606 00:20:12.632 21:25:27 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:12.632 21:25:27 -- target/tls.sh@31 -- # waitforlisten 1252606 /var/tmp/bdevperf.sock 00:20:12.632 21:25:27 -- common/autotest_common.sh@817 -- # '[' -z 1252606 ']' 00:20:12.632 21:25:27 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:12.632 21:25:27 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:12.632 21:25:27 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:12.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:12.632 21:25:27 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:12.632 21:25:27 -- common/autotest_common.sh@10 -- # set +x 00:20:12.632 21:25:27 -- target/tls.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:12.890 [2024-04-24 21:25:27.637811] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization... 00:20:12.890 [2024-04-24 21:25:27.637923] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1252606 ] 00:20:12.890 EAL: No free 2048 kB hugepages reported on node 1 00:20:12.890 [2024-04-24 21:25:27.723575] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:12.890 [2024-04-24 21:25:27.818428] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:13.459 21:25:28 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:13.459 21:25:28 -- common/autotest_common.sh@850 -- # return 0 00:20:13.459 21:25:28 -- target/tls.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.fxgZIEfVzF 00:20:13.720 [2024-04-24 21:25:28.479716] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:13.720 [2024-04-24 21:25:28.479846] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:13.720 [2024-04-24 21:25:28.486940] tcp.c: 878:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:13.720 [2024-04-24 21:25:28.486974] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:13.720 [2024-04-24 21:25:28.487013] /var/jenkins/workspace/dsa-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:13.720 [2024-04-24 21:25:28.487353] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004c40 (107): Transport endpoint is not connected 00:20:13.720 [2024-04-24 21:25:28.488332] 
nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004c40 (9): Bad file descriptor 00:20:13.721 [2024-04-24 21:25:28.489324] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:13.721 [2024-04-24 21:25:28.489343] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:13.721 [2024-04-24 21:25:28.489355] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:13.721 request: 00:20:13.721 { 00:20:13.721 "name": "TLSTEST", 00:20:13.721 "trtype": "tcp", 00:20:13.721 "traddr": "10.0.0.2", 00:20:13.721 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:13.721 "adrfam": "ipv4", 00:20:13.721 "trsvcid": "4420", 00:20:13.721 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:13.721 "psk": "/tmp/tmp.fxgZIEfVzF", 00:20:13.721 "method": "bdev_nvme_attach_controller", 00:20:13.721 "req_id": 1 00:20:13.721 } 00:20:13.721 Got JSON-RPC error response 00:20:13.721 response: 00:20:13.721 { 00:20:13.721 "code": -32602, 00:20:13.721 "message": "Invalid parameters" 00:20:13.721 } 00:20:13.721 21:25:28 -- target/tls.sh@36 -- # killprocess 1252606 00:20:13.721 21:25:28 -- common/autotest_common.sh@936 -- # '[' -z 1252606 ']' 00:20:13.721 21:25:28 -- common/autotest_common.sh@940 -- # kill -0 1252606 00:20:13.721 21:25:28 -- common/autotest_common.sh@941 -- # uname 00:20:13.721 21:25:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:13.721 21:25:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1252606 00:20:13.721 21:25:28 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:20:13.721 21:25:28 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:20:13.721 21:25:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1252606' 00:20:13.721 killing process with pid 1252606 00:20:13.721 21:25:28 -- common/autotest_common.sh@955 -- # kill 1252606 00:20:13.721 Received shutdown signal, test time was about 10.000000 seconds 00:20:13.721 00:20:13.721 Latency(us) 00:20:13.721 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:13.721 =================================================================================================================== 00:20:13.721 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:13.721 [2024-04-24 21:25:28.561847] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:13.721 21:25:28 -- common/autotest_common.sh@960 -- # wait 1252606 00:20:13.981 21:25:28 -- target/tls.sh@37 -- # return 1 00:20:13.981 21:25:28 -- common/autotest_common.sh@641 -- # es=1 00:20:13.981 21:25:28 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:20:13.981 21:25:28 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:20:13.981 21:25:28 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:20:13.981 21:25:28 -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.fxgZIEfVzF 00:20:13.981 21:25:28 -- common/autotest_common.sh@638 -- # local es=0 00:20:13.981 21:25:28 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.fxgZIEfVzF 00:20:13.981 21:25:28 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:20:13.981 21:25:28 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:13.981 21:25:28 -- 
common/autotest_common.sh@630 -- # type -t run_bdevperf 00:20:13.981 21:25:28 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:13.981 21:25:28 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.fxgZIEfVzF 00:20:13.981 21:25:28 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:13.981 21:25:28 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:20:13.981 21:25:28 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:13.981 21:25:28 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.fxgZIEfVzF' 00:20:13.981 21:25:28 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:13.981 21:25:28 -- target/tls.sh@28 -- # bdevperf_pid=1252840 00:20:13.981 21:25:28 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:13.981 21:25:28 -- target/tls.sh@31 -- # waitforlisten 1252840 /var/tmp/bdevperf.sock 00:20:13.981 21:25:28 -- common/autotest_common.sh@817 -- # '[' -z 1252840 ']' 00:20:13.981 21:25:28 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:13.981 21:25:28 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:13.981 21:25:28 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:13.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:13.981 21:25:28 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:13.981 21:25:28 -- common/autotest_common.sh@10 -- # set +x 00:20:13.981 21:25:28 -- target/tls.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:14.242 [2024-04-24 21:25:29.002577] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization... 
00:20:14.242 [2024-04-24 21:25:29.002725] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1252840 ] 00:20:14.242 EAL: No free 2048 kB hugepages reported on node 1 00:20:14.242 [2024-04-24 21:25:29.132152] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:14.500 [2024-04-24 21:25:29.233284] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:14.758 21:25:29 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:14.758 21:25:29 -- common/autotest_common.sh@850 -- # return 0 00:20:14.758 21:25:29 -- target/tls.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.fxgZIEfVzF 00:20:15.018 [2024-04-24 21:25:29.823459] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:15.018 [2024-04-24 21:25:29.823590] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:15.018 [2024-04-24 21:25:29.837014] tcp.c: 878:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:15.018 [2024-04-24 21:25:29.837047] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:15.018 [2024-04-24 21:25:29.837080] /var/jenkins/workspace/dsa-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:15.018 [2024-04-24 21:25:29.837484] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004c40 (107): Transport endpoint is not connected 00:20:15.018 [2024-04-24 21:25:29.838461] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004c40 (9): Bad file descriptor 00:20:15.018 [2024-04-24 21:25:29.839457] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:20:15.018 [2024-04-24 21:25:29.839472] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:15.018 [2024-04-24 21:25:29.839483] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
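This is the expected failure mode: the target resolves the handshake key by the identity string printed in the ERROR above, which joins a protocol tag with the host and subsystem NQNs (NVMe0R01 <hostnqn> <subnqn>), and only the host1/cnode1 pairing was ever registered (target/tls.sh@58 earlier). Any other pairing has no PSK table entry to find, so the lookup fails before a session is established; the failing request itself is dumped next. As a sketch, the one registration these lookups key off of was:

  # the only pairing in the target's PSK table; host1<->cnode2 and
  # host2<->cnode1 lookups therefore fail as these cases intend
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
      nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.fxgZIEfVzF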
00:20:15.018 request: 00:20:15.018 { 00:20:15.018 "name": "TLSTEST", 00:20:15.018 "trtype": "tcp", 00:20:15.018 "traddr": "10.0.0.2", 00:20:15.018 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:15.018 "adrfam": "ipv4", 00:20:15.018 "trsvcid": "4420", 00:20:15.018 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:15.018 "psk": "/tmp/tmp.fxgZIEfVzF", 00:20:15.018 "method": "bdev_nvme_attach_controller", 00:20:15.018 "req_id": 1 00:20:15.018 } 00:20:15.018 Got JSON-RPC error response 00:20:15.018 response: 00:20:15.018 { 00:20:15.018 "code": -32602, 00:20:15.018 "message": "Invalid parameters" 00:20:15.018 } 00:20:15.018 21:25:29 -- target/tls.sh@36 -- # killprocess 1252840 00:20:15.018 21:25:29 -- common/autotest_common.sh@936 -- # '[' -z 1252840 ']' 00:20:15.018 21:25:29 -- common/autotest_common.sh@940 -- # kill -0 1252840 00:20:15.018 21:25:29 -- common/autotest_common.sh@941 -- # uname 00:20:15.018 21:25:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:15.018 21:25:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1252840 00:20:15.018 21:25:29 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:20:15.018 21:25:29 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:20:15.018 21:25:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1252840' 00:20:15.018 killing process with pid 1252840 00:20:15.018 21:25:29 -- common/autotest_common.sh@955 -- # kill 1252840 00:20:15.018 Received shutdown signal, test time was about 10.000000 seconds 00:20:15.018 00:20:15.018 Latency(us) 00:20:15.018 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:15.018 =================================================================================================================== 00:20:15.018 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:15.018 [2024-04-24 21:25:29.893550] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:15.018 21:25:29 -- common/autotest_common.sh@960 -- # wait 1252840 00:20:15.589 21:25:30 -- target/tls.sh@37 -- # return 1 00:20:15.589 21:25:30 -- common/autotest_common.sh@641 -- # es=1 00:20:15.589 21:25:30 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:20:15.589 21:25:30 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:20:15.589 21:25:30 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:20:15.589 21:25:30 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:15.589 21:25:30 -- common/autotest_common.sh@638 -- # local es=0 00:20:15.589 21:25:30 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:15.589 21:25:30 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:20:15.589 21:25:30 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:15.589 21:25:30 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:20:15.589 21:25:30 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:15.589 21:25:30 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:15.589 21:25:30 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:15.589 21:25:30 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:15.589 21:25:30 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:15.589 21:25:30 -- target/tls.sh@23 -- # psk= 
00:20:15.589 21:25:30 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:15.589 21:25:30 -- target/tls.sh@28 -- # bdevperf_pid=1253024 00:20:15.589 21:25:30 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:15.589 21:25:30 -- target/tls.sh@31 -- # waitforlisten 1253024 /var/tmp/bdevperf.sock 00:20:15.589 21:25:30 -- common/autotest_common.sh@817 -- # '[' -z 1253024 ']' 00:20:15.589 21:25:30 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:15.589 21:25:30 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:15.589 21:25:30 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:15.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:15.589 21:25:30 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:15.589 21:25:30 -- common/autotest_common.sh@10 -- # set +x 00:20:15.589 21:25:30 -- target/tls.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:15.589 [2024-04-24 21:25:30.348841] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization... 00:20:15.589 [2024-04-24 21:25:30.348998] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1253024 ] 00:20:15.589 EAL: No free 2048 kB hugepages reported on node 1 00:20:15.589 [2024-04-24 21:25:30.480948] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:15.851 [2024-04-24 21:25:30.578846] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:16.109 21:25:31 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:16.109 21:25:31 -- common/autotest_common.sh@850 -- # return 0 00:20:16.109 21:25:31 -- target/tls.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:16.367 [2024-04-24 21:25:31.187949] /var/jenkins/workspace/dsa-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:16.367 [2024-04-24 21:25:31.189808] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004c40 (9): Bad file descriptor 00:20:16.367 [2024-04-24 21:25:31.190801] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:16.367 [2024-04-24 21:25:31.190816] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:16.367 [2024-04-24 21:25:31.190848] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
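The next attach attempt fails the same way with no PSK supplied at all, and the request/response pair that follows records it. For reference, the same call can be driven straight at the bdevperf RPC socket as plain JSON over a UNIX stream; a minimal sketch, illustrative only and not the scripts/rpc.py implementation:

    import json
    import socket

    # Request body copied from the dump below; with no "psk" offered to a
    # TLS-enabled listener, the expected reply is the -32602 "Invalid
    # parameters" error shown in the log.
    req = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "bdev_nvme_attach_controller",
        "params": {
            "name": "TLSTEST",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
        },
    }
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect("/var/tmp/bdevperf.sock")
        s.sendall(json.dumps(req).encode())
        print(s.recv(65536).decode())  # one recv suffices for this small reply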
00:20:16.367 request: 00:20:16.367 { 00:20:16.367 "name": "TLSTEST", 00:20:16.367 "trtype": "tcp", 00:20:16.367 "traddr": "10.0.0.2", 00:20:16.367 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:16.367 "adrfam": "ipv4", 00:20:16.367 "trsvcid": "4420", 00:20:16.367 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:16.367 "method": "bdev_nvme_attach_controller", 00:20:16.367 "req_id": 1 00:20:16.367 } 00:20:16.367 Got JSON-RPC error response 00:20:16.367 response: 00:20:16.367 { 00:20:16.367 "code": -32602, 00:20:16.367 "message": "Invalid parameters" 00:20:16.367 } 00:20:16.367 21:25:31 -- target/tls.sh@36 -- # killprocess 1253024 00:20:16.367 21:25:31 -- common/autotest_common.sh@936 -- # '[' -z 1253024 ']' 00:20:16.367 21:25:31 -- common/autotest_common.sh@940 -- # kill -0 1253024 00:20:16.367 21:25:31 -- common/autotest_common.sh@941 -- # uname 00:20:16.367 21:25:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:16.367 21:25:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1253024 00:20:16.367 21:25:31 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:20:16.367 21:25:31 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:20:16.367 21:25:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1253024' 00:20:16.367 killing process with pid 1253024 00:20:16.367 21:25:31 -- common/autotest_common.sh@955 -- # kill 1253024 00:20:16.367 Received shutdown signal, test time was about 10.000000 seconds 00:20:16.367 00:20:16.367 Latency(us) 00:20:16.367 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:16.367 =================================================================================================================== 00:20:16.367 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:16.367 21:25:31 -- common/autotest_common.sh@960 -- # wait 1253024 00:20:16.935 21:25:31 -- target/tls.sh@37 -- # return 1 00:20:16.935 21:25:31 -- common/autotest_common.sh@641 -- # es=1 00:20:16.935 21:25:31 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:20:16.935 21:25:31 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:20:16.935 21:25:31 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:20:16.935 21:25:31 -- target/tls.sh@158 -- # killprocess 1247463 00:20:16.935 21:25:31 -- common/autotest_common.sh@936 -- # '[' -z 1247463 ']' 00:20:16.935 21:25:31 -- common/autotest_common.sh@940 -- # kill -0 1247463 00:20:16.935 21:25:31 -- common/autotest_common.sh@941 -- # uname 00:20:16.935 21:25:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:16.935 21:25:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1247463 00:20:16.935 21:25:31 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:20:16.935 21:25:31 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:20:16.935 21:25:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1247463' 00:20:16.935 killing process with pid 1247463 00:20:16.935 21:25:31 -- common/autotest_common.sh@955 -- # kill 1247463 00:20:16.935 [2024-04-24 21:25:31.638978] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:16.935 21:25:31 -- common/autotest_common.sh@960 -- # wait 1247463 00:20:17.506 21:25:32 -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:20:17.506 21:25:32 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 
00112233445566778899aabbccddeeff0011223344556677 2 00:20:17.506 21:25:32 -- nvmf/common.sh@691 -- # local prefix key digest 00:20:17.506 21:25:32 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:20:17.506 21:25:32 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:20:17.506 21:25:32 -- nvmf/common.sh@693 -- # digest=2 00:20:17.506 21:25:32 -- nvmf/common.sh@694 -- # python - 00:20:17.506 21:25:32 -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:17.506 21:25:32 -- target/tls.sh@160 -- # mktemp 00:20:17.506 21:25:32 -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.hXGadZvObd 00:20:17.506 21:25:32 -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:17.506 21:25:32 -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.hXGadZvObd 00:20:17.506 21:25:32 -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:20:17.506 21:25:32 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:17.506 21:25:32 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:17.506 21:25:32 -- common/autotest_common.sh@10 -- # set +x 00:20:17.506 21:25:32 -- nvmf/common.sh@470 -- # nvmfpid=1253542 00:20:17.506 21:25:32 -- nvmf/common.sh@471 -- # waitforlisten 1253542 00:20:17.506 21:25:32 -- common/autotest_common.sh@817 -- # '[' -z 1253542 ']' 00:20:17.506 21:25:32 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:17.506 21:25:32 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:17.506 21:25:32 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:17.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:17.506 21:25:32 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:17.506 21:25:32 -- common/autotest_common.sh@10 -- # set +x 00:20:17.506 21:25:32 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:17.506 [2024-04-24 21:25:32.355490] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization... 00:20:17.506 [2024-04-24 21:25:32.355636] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:17.506 EAL: No free 2048 kB hugepages reported on node 1 00:20:17.767 [2024-04-24 21:25:32.499464] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:17.767 [2024-04-24 21:25:32.593918] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:17.767 [2024-04-24 21:25:32.593970] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:17.767 [2024-04-24 21:25:32.593981] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:17.767 [2024-04-24 21:25:32.593991] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:17.767 [2024-04-24 21:25:32.594000] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
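The long-format key minted above is the configured PSK in the TLS interchange framing: the raw key bytes plus a 4-byte CRC-32 tail, base64-encoded between "NVMeTLSkey-1:<hash>:" and a trailing colon. A sketch of the transform, assuming the little-endian CRC-32 convention applied by the embedded python that format_interchange_psk runs (hash id 2 selects the second, SHA-384, variant of the format):

    import base64
    import zlib

    def format_interchange_psk(key: str, hash_id: int) -> str:
        raw = key.encode()                           # configured PSK bytes
        crc = zlib.crc32(raw).to_bytes(4, "little")  # assumed CRC-32 tail
        return f"NVMeTLSkey-1:{hash_id:02}:{base64.b64encode(raw + crc).decode()}:"

    # Should reproduce the NVMeTLSkey-1:02:MDAx...wWXNJw==: value echoed above.
    print(format_interchange_psk(
        "00112233445566778899aabbccddeeff0011223344556677", 2))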
00:20:17.767 [2024-04-24 21:25:32.594033] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:18.335 21:25:33 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:18.335 21:25:33 -- common/autotest_common.sh@850 -- # return 0 00:20:18.335 21:25:33 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:18.335 21:25:33 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:18.335 21:25:33 -- common/autotest_common.sh@10 -- # set +x 00:20:18.335 21:25:33 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:18.335 21:25:33 -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.hXGadZvObd 00:20:18.335 21:25:33 -- target/tls.sh@49 -- # local key=/tmp/tmp.hXGadZvObd 00:20:18.335 21:25:33 -- target/tls.sh@51 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:18.335 [2024-04-24 21:25:33.264375] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:18.335 21:25:33 -- target/tls.sh@52 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:18.594 21:25:33 -- target/tls.sh@53 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:18.594 [2024-04-24 21:25:33.548444] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:18.594 [2024-04-24 21:25:33.548669] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:18.855 21:25:33 -- target/tls.sh@55 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:18.855 malloc0 00:20:18.855 21:25:33 -- target/tls.sh@56 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:19.116 21:25:33 -- target/tls.sh@58 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.hXGadZvObd 00:20:19.116 [2024-04-24 21:25:34.008351] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:19.116 21:25:34 -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.hXGadZvObd 00:20:19.116 21:25:34 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:19.116 21:25:34 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:19.116 21:25:34 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:19.116 21:25:34 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.hXGadZvObd' 00:20:19.116 21:25:34 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:19.116 21:25:34 -- target/tls.sh@28 -- # bdevperf_pid=1253874 00:20:19.116 21:25:34 -- target/tls.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:19.116 21:25:34 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:19.116 21:25:34 -- target/tls.sh@31 -- # waitforlisten 1253874 /var/tmp/bdevperf.sock 00:20:19.116 21:25:34 -- common/autotest_common.sh@817 -- # '[' -z 1253874 ']' 00:20:19.116 21:25:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:19.116 21:25:34 -- common/autotest_common.sh@822 -- # local 
max_retries=100 00:20:19.116 21:25:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:19.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:19.116 21:25:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:19.116 21:25:34 -- common/autotest_common.sh@10 -- # set +x 00:20:19.377 [2024-04-24 21:25:34.109748] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization... 00:20:19.377 [2024-04-24 21:25:34.109904] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1253874 ] 00:20:19.377 EAL: No free 2048 kB hugepages reported on node 1 00:20:19.377 [2024-04-24 21:25:34.241000] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:19.377 [2024-04-24 21:25:34.337754] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:19.945 21:25:34 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:19.945 21:25:34 -- common/autotest_common.sh@850 -- # return 0 00:20:19.945 21:25:34 -- target/tls.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.hXGadZvObd 00:20:20.204 [2024-04-24 21:25:34.943713] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:20.204 [2024-04-24 21:25:34.943843] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:20.204 TLSTESTn1 00:20:20.204 21:25:35 -- target/tls.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:20.204 Running I/O for 10 seconds... 
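While the 10-second run proceeds, note how the test drives it: bdevperf.py issues a single RPC on the bdevperf socket and blocks until the run completes, with -t 20 bounding the wait. A rough equivalent, assuming "perform_tests" is the method name the helper sends:

    import json
    import socket

    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect("/var/tmp/bdevperf.sock")
        s.settimeout(20)  # mirrors the -t 20 client-side timeout
        s.sendall(json.dumps({"jsonrpc": "2.0", "id": 1,
                              "method": "perform_tests"}).encode())
        print(s.recv(65536).decode())  # returns once the -t 10 workload finishes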
00:20:30.214 00:20:30.214 Latency(us) 00:20:30.214 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:30.214 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:30.214 Verification LBA range: start 0x0 length 0x2000 00:20:30.214 TLSTESTn1 : 10.02 5379.39 21.01 0.00 0.00 23753.84 6588.09 66225.85 00:20:30.214 =================================================================================================================== 00:20:30.214 Total : 5379.39 21.01 0.00 0.00 23753.84 6588.09 66225.85 00:20:30.214 0 00:20:30.214 21:25:45 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:30.214 21:25:45 -- target/tls.sh@45 -- # killprocess 1253874 00:20:30.214 21:25:45 -- common/autotest_common.sh@936 -- # '[' -z 1253874 ']' 00:20:30.214 21:25:45 -- common/autotest_common.sh@940 -- # kill -0 1253874 00:20:30.214 21:25:45 -- common/autotest_common.sh@941 -- # uname 00:20:30.214 21:25:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:30.214 21:25:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1253874 00:20:30.474 21:25:45 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:20:30.474 21:25:45 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:20:30.474 21:25:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1253874' 00:20:30.474 killing process with pid 1253874 00:20:30.474 21:25:45 -- common/autotest_common.sh@955 -- # kill 1253874 00:20:30.474 Received shutdown signal, test time was about 10.000000 seconds 00:20:30.474 00:20:30.474 Latency(us) 00:20:30.474 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:30.474 =================================================================================================================== 00:20:30.474 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:30.474 [2024-04-24 21:25:45.193386] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:30.474 21:25:45 -- common/autotest_common.sh@960 -- # wait 1253874 00:20:30.733 21:25:45 -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.hXGadZvObd 00:20:30.733 21:25:45 -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.hXGadZvObd 00:20:30.733 21:25:45 -- common/autotest_common.sh@638 -- # local es=0 00:20:30.733 21:25:45 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.hXGadZvObd 00:20:30.733 21:25:45 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:20:30.733 21:25:45 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:30.733 21:25:45 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:20:30.733 21:25:45 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:30.733 21:25:45 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.hXGadZvObd 00:20:30.733 21:25:45 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:30.733 21:25:45 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:30.733 21:25:45 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:30.733 21:25:45 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.hXGadZvObd' 00:20:30.733 21:25:45 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:30.733 21:25:45 -- target/tls.sh@28 -- # 
bdevperf_pid=1255972 00:20:30.733 21:25:45 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:30.733 21:25:45 -- target/tls.sh@31 -- # waitforlisten 1255972 /var/tmp/bdevperf.sock 00:20:30.733 21:25:45 -- common/autotest_common.sh@817 -- # '[' -z 1255972 ']' 00:20:30.733 21:25:45 -- target/tls.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:30.733 21:25:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:30.733 21:25:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:30.733 21:25:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:30.733 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:30.733 21:25:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:30.733 21:25:45 -- common/autotest_common.sh@10 -- # set +x 00:20:30.733 [2024-04-24 21:25:45.658704] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization... 00:20:30.733 [2024-04-24 21:25:45.658824] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1255972 ] 00:20:30.991 EAL: No free 2048 kB hugepages reported on node 1 00:20:30.991 [2024-04-24 21:25:45.772766] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:30.991 [2024-04-24 21:25:45.868603] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:31.558 21:25:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:31.558 21:25:46 -- common/autotest_common.sh@850 -- # return 0 00:20:31.558 21:25:46 -- target/tls.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.hXGadZvObd 00:20:31.558 [2024-04-24 21:25:46.478094] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:31.558 [2024-04-24 21:25:46.478172] bdev_nvme.c:6054:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:20:31.558 [2024-04-24 21:25:46.478185] bdev_nvme.c:6163:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.hXGadZvObd 00:20:31.558 request: 00:20:31.558 { 00:20:31.558 "name": "TLSTEST", 00:20:31.558 "trtype": "tcp", 00:20:31.558 "traddr": "10.0.0.2", 00:20:31.558 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:31.558 "adrfam": "ipv4", 00:20:31.558 "trsvcid": "4420", 00:20:31.558 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:31.558 "psk": "/tmp/tmp.hXGadZvObd", 00:20:31.558 "method": "bdev_nvme_attach_controller", 00:20:31.558 "req_id": 1 00:20:31.558 } 00:20:31.558 Got JSON-RPC error response 00:20:31.558 response: 00:20:31.558 { 00:20:31.558 "code": -1, 00:20:31.558 "message": "Operation not permitted" 00:20:31.558 } 00:20:31.558 21:25:46 -- target/tls.sh@36 -- # killprocess 1255972 00:20:31.558 21:25:46 -- common/autotest_common.sh@936 -- # '[' -z 1255972 ']' 00:20:31.558 21:25:46 -- common/autotest_common.sh@940 -- # kill -0 1255972 00:20:31.558 21:25:46 -- common/autotest_common.sh@941 -- # uname 00:20:31.558 21:25:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:31.558 21:25:46 -- 
common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1255972 00:20:31.824 21:25:46 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:20:31.824 21:25:46 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:20:31.824 21:25:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1255972' 00:20:31.824 killing process with pid 1255972 00:20:31.824 21:25:46 -- common/autotest_common.sh@955 -- # kill 1255972 00:20:31.824 Received shutdown signal, test time was about 10.000000 seconds 00:20:31.824 00:20:31.824 Latency(us) 00:20:31.824 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:31.824 =================================================================================================================== 00:20:31.824 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:31.824 21:25:46 -- common/autotest_common.sh@960 -- # wait 1255972 00:20:32.084 21:25:46 -- target/tls.sh@37 -- # return 1 00:20:32.084 21:25:46 -- common/autotest_common.sh@641 -- # es=1 00:20:32.084 21:25:46 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:20:32.084 21:25:46 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:20:32.084 21:25:46 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:20:32.084 21:25:46 -- target/tls.sh@174 -- # killprocess 1253542 00:20:32.084 21:25:46 -- common/autotest_common.sh@936 -- # '[' -z 1253542 ']' 00:20:32.084 21:25:46 -- common/autotest_common.sh@940 -- # kill -0 1253542 00:20:32.084 21:25:46 -- common/autotest_common.sh@941 -- # uname 00:20:32.084 21:25:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:32.084 21:25:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1253542 00:20:32.084 21:25:46 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:20:32.084 21:25:46 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:20:32.084 21:25:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1253542' 00:20:32.084 killing process with pid 1253542 00:20:32.084 21:25:46 -- common/autotest_common.sh@955 -- # kill 1253542 00:20:32.084 [2024-04-24 21:25:46.977979] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:32.084 21:25:46 -- common/autotest_common.sh@960 -- # wait 1253542 00:20:32.652 21:25:47 -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:20:32.652 21:25:47 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:32.652 21:25:47 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:32.652 21:25:47 -- common/autotest_common.sh@10 -- # set +x 00:20:32.652 21:25:47 -- nvmf/common.sh@470 -- # nvmfpid=1256411 00:20:32.652 21:25:47 -- nvmf/common.sh@471 -- # waitforlisten 1256411 00:20:32.652 21:25:47 -- common/autotest_common.sh@817 -- # '[' -z 1256411 ']' 00:20:32.652 21:25:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:32.652 21:25:47 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:32.652 21:25:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:32.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
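The sequence above is the permissions gate working as intended: after chmod 0666 the PSK file is group- and world-accessible, bdev_nvme_load_psk rejects it, and the RPC surfaces -1 / "Operation not permitted". The observable behaviour amounts to a check like this sketch (the real check lives in SPDK's C code; this only models what the log shows):

    import os
    import stat

    def psk_perms_ok(path: str) -> bool:
        # Reject any group/other permission bits, i.e. require 0600-style modes.
        mode = os.stat(path).st_mode
        return (mode & (stat.S_IRWXG | stat.S_IRWXO)) == 0

    print(psk_perms_ok("/tmp/tmp.hXGadZvObd"))  # False after the chmod 0666 above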
00:20:32.652 21:25:47 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:32.652 21:25:47 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:32.652 21:25:47 -- common/autotest_common.sh@10 -- # set +x 00:20:32.652 [2024-04-24 21:25:47.569107] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization... 00:20:32.652 [2024-04-24 21:25:47.569219] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:32.910 EAL: No free 2048 kB hugepages reported on node 1 00:20:32.911 [2024-04-24 21:25:47.691302] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:32.911 [2024-04-24 21:25:47.787393] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:32.911 [2024-04-24 21:25:47.787432] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:32.911 [2024-04-24 21:25:47.787444] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:32.911 [2024-04-24 21:25:47.787453] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:32.911 [2024-04-24 21:25:47.787460] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:32.911 [2024-04-24 21:25:47.787490] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:33.481 21:25:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:33.481 21:25:48 -- common/autotest_common.sh@850 -- # return 0 00:20:33.481 21:25:48 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:33.481 21:25:48 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:33.481 21:25:48 -- common/autotest_common.sh@10 -- # set +x 00:20:33.481 21:25:48 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:33.481 21:25:48 -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.hXGadZvObd 00:20:33.481 21:25:48 -- common/autotest_common.sh@638 -- # local es=0 00:20:33.481 21:25:48 -- common/autotest_common.sh@640 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.hXGadZvObd 00:20:33.481 21:25:48 -- common/autotest_common.sh@626 -- # local arg=setup_nvmf_tgt 00:20:33.481 21:25:48 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:33.481 21:25:48 -- common/autotest_common.sh@630 -- # type -t setup_nvmf_tgt 00:20:33.481 21:25:48 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:33.481 21:25:48 -- common/autotest_common.sh@641 -- # setup_nvmf_tgt /tmp/tmp.hXGadZvObd 00:20:33.481 21:25:48 -- target/tls.sh@49 -- # local key=/tmp/tmp.hXGadZvObd 00:20:33.481 21:25:48 -- target/tls.sh@51 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:33.481 [2024-04-24 21:25:48.437018] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:33.880 21:25:48 -- target/tls.sh@52 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:33.880 21:25:48 -- target/tls.sh@53 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:33.880 [2024-04-24 
21:25:48.733075] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:33.880 [2024-04-24 21:25:48.733332] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:33.880 21:25:48 -- target/tls.sh@55 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:34.173 malloc0 00:20:34.173 21:25:48 -- target/tls.sh@56 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:34.432 21:25:49 -- target/tls.sh@58 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.hXGadZvObd 00:20:34.432 [2024-04-24 21:25:49.265041] tcp.c:3562:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:20:34.432 [2024-04-24 21:25:49.265080] tcp.c:3648:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:20:34.432 [2024-04-24 21:25:49.265108] subsystem.c: 967:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:20:34.432 request: 00:20:34.432 { 00:20:34.432 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:34.432 "host": "nqn.2016-06.io.spdk:host1", 00:20:34.432 "psk": "/tmp/tmp.hXGadZvObd", 00:20:34.432 "method": "nvmf_subsystem_add_host", 00:20:34.432 "req_id": 1 00:20:34.432 } 00:20:34.432 Got JSON-RPC error response 00:20:34.432 response: 00:20:34.432 { 00:20:34.432 "code": -32603, 00:20:34.432 "message": "Internal error" 00:20:34.432 } 00:20:34.432 21:25:49 -- common/autotest_common.sh@641 -- # es=1 00:20:34.432 21:25:49 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:20:34.432 21:25:49 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:20:34.432 21:25:49 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:20:34.432 21:25:49 -- target/tls.sh@180 -- # killprocess 1256411 00:20:34.432 21:25:49 -- common/autotest_common.sh@936 -- # '[' -z 1256411 ']' 00:20:34.432 21:25:49 -- common/autotest_common.sh@940 -- # kill -0 1256411 00:20:34.432 21:25:49 -- common/autotest_common.sh@941 -- # uname 00:20:34.432 21:25:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:34.432 21:25:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1256411 00:20:34.432 21:25:49 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:20:34.432 21:25:49 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:20:34.432 21:25:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1256411' 00:20:34.432 killing process with pid 1256411 00:20:34.432 21:25:49 -- common/autotest_common.sh@955 -- # kill 1256411 00:20:34.433 21:25:49 -- common/autotest_common.sh@960 -- # wait 1256411 00:20:35.000 21:25:49 -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.hXGadZvObd 00:20:35.000 21:25:49 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:20:35.000 21:25:49 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:35.000 21:25:49 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:35.000 21:25:49 -- common/autotest_common.sh@10 -- # set +x 00:20:35.000 21:25:49 -- nvmf/common.sh@470 -- # nvmfpid=1256910 00:20:35.000 21:25:49 -- nvmf/common.sh@471 -- # waitforlisten 1256910 00:20:35.000 21:25:49 -- common/autotest_common.sh@817 -- # '[' -z 1256910 ']' 00:20:35.000 21:25:49 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:35.000 
21:25:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:35.000 21:25:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:35.000 21:25:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:35.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:35.000 21:25:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:35.000 21:25:49 -- common/autotest_common.sh@10 -- # set +x 00:20:35.000 [2024-04-24 21:25:49.889703] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization... 00:20:35.000 [2024-04-24 21:25:49.889812] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:35.261 EAL: No free 2048 kB hugepages reported on node 1 00:20:35.261 [2024-04-24 21:25:50.014450] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:35.261 [2024-04-24 21:25:50.112774] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:35.261 [2024-04-24 21:25:50.112823] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:35.261 [2024-04-24 21:25:50.112832] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:35.261 [2024-04-24 21:25:50.112842] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:35.261 [2024-04-24 21:25:50.112850] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:35.261 [2024-04-24 21:25:50.112885] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:35.828 21:25:50 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:35.828 21:25:50 -- common/autotest_common.sh@850 -- # return 0 00:20:35.828 21:25:50 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:35.828 21:25:50 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:35.828 21:25:50 -- common/autotest_common.sh@10 -- # set +x 00:20:35.828 21:25:50 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:35.828 21:25:50 -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.hXGadZvObd 00:20:35.828 21:25:50 -- target/tls.sh@49 -- # local key=/tmp/tmp.hXGadZvObd 00:20:35.828 21:25:50 -- target/tls.sh@51 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:35.828 [2024-04-24 21:25:50.752719] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:35.828 21:25:50 -- target/tls.sh@52 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:36.087 21:25:50 -- target/tls.sh@53 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:36.345 [2024-04-24 21:25:51.060749] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:36.345 [2024-04-24 21:25:51.060973] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:36.345 21:25:51 -- target/tls.sh@55 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 
00:20:36.345 malloc0 00:20:36.345 21:25:51 -- target/tls.sh@56 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:36.603 21:25:51 -- target/tls.sh@58 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.hXGadZvObd 00:20:36.603 [2024-04-24 21:25:51.493643] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:36.603 21:25:51 -- target/tls.sh@188 -- # bdevperf_pid=1257242 00:20:36.603 21:25:51 -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:36.603 21:25:51 -- target/tls.sh@191 -- # waitforlisten 1257242 /var/tmp/bdevperf.sock 00:20:36.603 21:25:51 -- common/autotest_common.sh@817 -- # '[' -z 1257242 ']' 00:20:36.603 21:25:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:36.603 21:25:51 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:36.603 21:25:51 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:36.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:36.603 21:25:51 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:36.603 21:25:51 -- common/autotest_common.sh@10 -- # set +x 00:20:36.603 21:25:51 -- target/tls.sh@187 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:36.862 [2024-04-24 21:25:51.578836] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization... 
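Target bring-up for this run repeats the same RPC sequence seen earlier: transport, subsystem, TLS listener, malloc bdev, namespace, then the host/PSK binding. Condensed as a sketch, with the commands transcribed from the log and only the subprocess wrapper added for illustration:

    import subprocess

    RPC = "/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py"
    NQN = "nqn.2016-06.io.spdk:cnode1"
    steps = [
        [RPC, "nvmf_create_transport", "-t", "tcp", "-o"],
        [RPC, "nvmf_create_subsystem", NQN, "-s", "SPDK00000000000001", "-m", "10"],
        [RPC, "nvmf_subsystem_add_listener", NQN, "-t", "tcp",
         "-a", "10.0.0.2", "-s", "4420", "-k"],
        [RPC, "bdev_malloc_create", "32", "4096", "-b", "malloc0"],
        [RPC, "nvmf_subsystem_add_ns", NQN, "malloc0", "-n", "1"],
        [RPC, "nvmf_subsystem_add_host", NQN, "nqn.2016-06.io.spdk:host1",
         "--psk", "/tmp/tmp.hXGadZvObd"],
    ]
    for cmd in steps:
        subprocess.run(cmd, check=True)  # stop at the first failing step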
00:20:36.862 [2024-04-24 21:25:51.578945] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1257242 ] 00:20:36.862 EAL: No free 2048 kB hugepages reported on node 1 00:20:36.862 [2024-04-24 21:25:51.689208] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:36.862 [2024-04-24 21:25:51.783148] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:37.429 21:25:52 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:37.429 21:25:52 -- common/autotest_common.sh@850 -- # return 0 00:20:37.429 21:25:52 -- target/tls.sh@192 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.hXGadZvObd 00:20:37.429 [2024-04-24 21:25:52.385253] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:37.429 [2024-04-24 21:25:52.385384] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:37.688 TLSTESTn1 00:20:37.688 21:25:52 -- target/tls.sh@196 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py save_config 00:20:37.947 21:25:52 -- target/tls.sh@196 -- # tgtconf='{ 00:20:37.947 "subsystems": [ 00:20:37.947 { 00:20:37.947 "subsystem": "keyring", 00:20:37.947 "config": [] 00:20:37.947 }, 00:20:37.947 { 00:20:37.947 "subsystem": "iobuf", 00:20:37.947 "config": [ 00:20:37.947 { 00:20:37.947 "method": "iobuf_set_options", 00:20:37.947 "params": { 00:20:37.947 "small_pool_count": 8192, 00:20:37.947 "large_pool_count": 1024, 00:20:37.947 "small_bufsize": 8192, 00:20:37.947 "large_bufsize": 135168 00:20:37.947 } 00:20:37.947 } 00:20:37.947 ] 00:20:37.947 }, 00:20:37.947 { 00:20:37.947 "subsystem": "sock", 00:20:37.947 "config": [ 00:20:37.947 { 00:20:37.947 "method": "sock_impl_set_options", 00:20:37.948 "params": { 00:20:37.948 "impl_name": "posix", 00:20:37.948 "recv_buf_size": 2097152, 00:20:37.948 "send_buf_size": 2097152, 00:20:37.948 "enable_recv_pipe": true, 00:20:37.948 "enable_quickack": false, 00:20:37.948 "enable_placement_id": 0, 00:20:37.948 "enable_zerocopy_send_server": true, 00:20:37.948 "enable_zerocopy_send_client": false, 00:20:37.948 "zerocopy_threshold": 0, 00:20:37.948 "tls_version": 0, 00:20:37.948 "enable_ktls": false 00:20:37.948 } 00:20:37.948 }, 00:20:37.948 { 00:20:37.948 "method": "sock_impl_set_options", 00:20:37.948 "params": { 00:20:37.948 "impl_name": "ssl", 00:20:37.948 "recv_buf_size": 4096, 00:20:37.948 "send_buf_size": 4096, 00:20:37.948 "enable_recv_pipe": true, 00:20:37.948 "enable_quickack": false, 00:20:37.948 "enable_placement_id": 0, 00:20:37.948 "enable_zerocopy_send_server": true, 00:20:37.948 "enable_zerocopy_send_client": false, 00:20:37.948 "zerocopy_threshold": 0, 00:20:37.948 "tls_version": 0, 00:20:37.948 "enable_ktls": false 00:20:37.948 } 00:20:37.948 } 00:20:37.948 ] 00:20:37.948 }, 00:20:37.948 { 00:20:37.948 "subsystem": "vmd", 00:20:37.948 "config": [] 00:20:37.948 }, 00:20:37.948 { 00:20:37.948 "subsystem": "accel", 00:20:37.948 "config": [ 00:20:37.948 { 00:20:37.948 "method": "accel_set_options", 00:20:37.948 "params": { 00:20:37.948 "small_cache_size": 128, 00:20:37.948 "large_cache_size": 16, 00:20:37.948 
"task_count": 2048, 00:20:37.948 "sequence_count": 2048, 00:20:37.948 "buf_count": 2048 00:20:37.948 } 00:20:37.948 } 00:20:37.948 ] 00:20:37.948 }, 00:20:37.948 { 00:20:37.948 "subsystem": "bdev", 00:20:37.948 "config": [ 00:20:37.948 { 00:20:37.948 "method": "bdev_set_options", 00:20:37.948 "params": { 00:20:37.948 "bdev_io_pool_size": 65535, 00:20:37.948 "bdev_io_cache_size": 256, 00:20:37.948 "bdev_auto_examine": true, 00:20:37.948 "iobuf_small_cache_size": 128, 00:20:37.948 "iobuf_large_cache_size": 16 00:20:37.948 } 00:20:37.948 }, 00:20:37.948 { 00:20:37.948 "method": "bdev_raid_set_options", 00:20:37.948 "params": { 00:20:37.948 "process_window_size_kb": 1024 00:20:37.948 } 00:20:37.948 }, 00:20:37.948 { 00:20:37.948 "method": "bdev_iscsi_set_options", 00:20:37.948 "params": { 00:20:37.948 "timeout_sec": 30 00:20:37.948 } 00:20:37.948 }, 00:20:37.948 { 00:20:37.948 "method": "bdev_nvme_set_options", 00:20:37.948 "params": { 00:20:37.948 "action_on_timeout": "none", 00:20:37.948 "timeout_us": 0, 00:20:37.948 "timeout_admin_us": 0, 00:20:37.948 "keep_alive_timeout_ms": 10000, 00:20:37.948 "arbitration_burst": 0, 00:20:37.948 "low_priority_weight": 0, 00:20:37.948 "medium_priority_weight": 0, 00:20:37.948 "high_priority_weight": 0, 00:20:37.948 "nvme_adminq_poll_period_us": 10000, 00:20:37.948 "nvme_ioq_poll_period_us": 0, 00:20:37.948 "io_queue_requests": 0, 00:20:37.948 "delay_cmd_submit": true, 00:20:37.948 "transport_retry_count": 4, 00:20:37.948 "bdev_retry_count": 3, 00:20:37.948 "transport_ack_timeout": 0, 00:20:37.948 "ctrlr_loss_timeout_sec": 0, 00:20:37.948 "reconnect_delay_sec": 0, 00:20:37.948 "fast_io_fail_timeout_sec": 0, 00:20:37.948 "disable_auto_failback": false, 00:20:37.948 "generate_uuids": false, 00:20:37.948 "transport_tos": 0, 00:20:37.948 "nvme_error_stat": false, 00:20:37.948 "rdma_srq_size": 0, 00:20:37.948 "io_path_stat": false, 00:20:37.948 "allow_accel_sequence": false, 00:20:37.948 "rdma_max_cq_size": 0, 00:20:37.948 "rdma_cm_event_timeout_ms": 0, 00:20:37.948 "dhchap_digests": [ 00:20:37.948 "sha256", 00:20:37.948 "sha384", 00:20:37.948 "sha512" 00:20:37.948 ], 00:20:37.948 "dhchap_dhgroups": [ 00:20:37.948 "null", 00:20:37.948 "ffdhe2048", 00:20:37.948 "ffdhe3072", 00:20:37.948 "ffdhe4096", 00:20:37.948 "ffdhe6144", 00:20:37.948 "ffdhe8192" 00:20:37.948 ] 00:20:37.948 } 00:20:37.948 }, 00:20:37.948 { 00:20:37.948 "method": "bdev_nvme_set_hotplug", 00:20:37.948 "params": { 00:20:37.948 "period_us": 100000, 00:20:37.948 "enable": false 00:20:37.948 } 00:20:37.948 }, 00:20:37.948 { 00:20:37.948 "method": "bdev_malloc_create", 00:20:37.948 "params": { 00:20:37.948 "name": "malloc0", 00:20:37.948 "num_blocks": 8192, 00:20:37.948 "block_size": 4096, 00:20:37.948 "physical_block_size": 4096, 00:20:37.948 "uuid": "e12bc469-760f-4a77-a2eb-0b33411ab625", 00:20:37.948 "optimal_io_boundary": 0 00:20:37.948 } 00:20:37.948 }, 00:20:37.948 { 00:20:37.948 "method": "bdev_wait_for_examine" 00:20:37.948 } 00:20:37.948 ] 00:20:37.948 }, 00:20:37.948 { 00:20:37.948 "subsystem": "nbd", 00:20:37.948 "config": [] 00:20:37.948 }, 00:20:37.948 { 00:20:37.948 "subsystem": "scheduler", 00:20:37.948 "config": [ 00:20:37.948 { 00:20:37.948 "method": "framework_set_scheduler", 00:20:37.948 "params": { 00:20:37.948 "name": "static" 00:20:37.948 } 00:20:37.948 } 00:20:37.948 ] 00:20:37.948 }, 00:20:37.948 { 00:20:37.948 "subsystem": "nvmf", 00:20:37.948 "config": [ 00:20:37.948 { 00:20:37.948 "method": "nvmf_set_config", 00:20:37.948 "params": { 00:20:37.948 "discovery_filter": 
"match_any", 00:20:37.948 "admin_cmd_passthru": { 00:20:37.948 "identify_ctrlr": false 00:20:37.948 } 00:20:37.948 } 00:20:37.948 }, 00:20:37.948 { 00:20:37.948 "method": "nvmf_set_max_subsystems", 00:20:37.948 "params": { 00:20:37.948 "max_subsystems": 1024 00:20:37.948 } 00:20:37.948 }, 00:20:37.948 { 00:20:37.948 "method": "nvmf_set_crdt", 00:20:37.948 "params": { 00:20:37.948 "crdt1": 0, 00:20:37.948 "crdt2": 0, 00:20:37.948 "crdt3": 0 00:20:37.948 } 00:20:37.948 }, 00:20:37.948 { 00:20:37.948 "method": "nvmf_create_transport", 00:20:37.948 "params": { 00:20:37.948 "trtype": "TCP", 00:20:37.948 "max_queue_depth": 128, 00:20:37.948 "max_io_qpairs_per_ctrlr": 127, 00:20:37.948 "in_capsule_data_size": 4096, 00:20:37.948 "max_io_size": 131072, 00:20:37.948 "io_unit_size": 131072, 00:20:37.948 "max_aq_depth": 128, 00:20:37.948 "num_shared_buffers": 511, 00:20:37.948 "buf_cache_size": 4294967295, 00:20:37.948 "dif_insert_or_strip": false, 00:20:37.948 "zcopy": false, 00:20:37.948 "c2h_success": false, 00:20:37.948 "sock_priority": 0, 00:20:37.948 "abort_timeout_sec": 1, 00:20:37.948 "ack_timeout": 0 00:20:37.948 } 00:20:37.948 }, 00:20:37.948 { 00:20:37.948 "method": "nvmf_create_subsystem", 00:20:37.948 "params": { 00:20:37.948 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:37.948 "allow_any_host": false, 00:20:37.948 "serial_number": "SPDK00000000000001", 00:20:37.948 "model_number": "SPDK bdev Controller", 00:20:37.948 "max_namespaces": 10, 00:20:37.948 "min_cntlid": 1, 00:20:37.948 "max_cntlid": 65519, 00:20:37.948 "ana_reporting": false 00:20:37.948 } 00:20:37.948 }, 00:20:37.948 { 00:20:37.948 "method": "nvmf_subsystem_add_host", 00:20:37.948 "params": { 00:20:37.948 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:37.948 "host": "nqn.2016-06.io.spdk:host1", 00:20:37.948 "psk": "/tmp/tmp.hXGadZvObd" 00:20:37.948 } 00:20:37.948 }, 00:20:37.948 { 00:20:37.948 "method": "nvmf_subsystem_add_ns", 00:20:37.948 "params": { 00:20:37.948 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:37.948 "namespace": { 00:20:37.948 "nsid": 1, 00:20:37.949 "bdev_name": "malloc0", 00:20:37.949 "nguid": "E12BC469760F4A77A2EB0B33411AB625", 00:20:37.949 "uuid": "e12bc469-760f-4a77-a2eb-0b33411ab625", 00:20:37.949 "no_auto_visible": false 00:20:37.949 } 00:20:37.949 } 00:20:37.949 }, 00:20:37.949 { 00:20:37.949 "method": "nvmf_subsystem_add_listener", 00:20:37.949 "params": { 00:20:37.949 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:37.949 "listen_address": { 00:20:37.949 "trtype": "TCP", 00:20:37.949 "adrfam": "IPv4", 00:20:37.949 "traddr": "10.0.0.2", 00:20:37.949 "trsvcid": "4420" 00:20:37.949 }, 00:20:37.949 "secure_channel": true 00:20:37.949 } 00:20:37.949 } 00:20:37.949 ] 00:20:37.949 } 00:20:37.949 ] 00:20:37.949 }' 00:20:37.949 21:25:52 -- target/tls.sh@197 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:37.949 21:25:52 -- target/tls.sh@197 -- # bdevperfconf='{ 00:20:37.949 "subsystems": [ 00:20:37.949 { 00:20:37.949 "subsystem": "keyring", 00:20:37.949 "config": [] 00:20:37.949 }, 00:20:37.949 { 00:20:37.949 "subsystem": "iobuf", 00:20:37.949 "config": [ 00:20:37.949 { 00:20:37.949 "method": "iobuf_set_options", 00:20:37.949 "params": { 00:20:37.949 "small_pool_count": 8192, 00:20:37.949 "large_pool_count": 1024, 00:20:37.949 "small_bufsize": 8192, 00:20:37.949 "large_bufsize": 135168 00:20:37.949 } 00:20:37.949 } 00:20:37.949 ] 00:20:37.949 }, 00:20:37.949 { 00:20:37.949 "subsystem": "sock", 00:20:37.949 "config": [ 00:20:37.949 { 00:20:37.949 "method": 
"sock_impl_set_options", 00:20:37.949 "params": { 00:20:37.949 "impl_name": "posix", 00:20:37.949 "recv_buf_size": 2097152, 00:20:37.949 "send_buf_size": 2097152, 00:20:37.949 "enable_recv_pipe": true, 00:20:37.949 "enable_quickack": false, 00:20:37.949 "enable_placement_id": 0, 00:20:37.949 "enable_zerocopy_send_server": true, 00:20:37.949 "enable_zerocopy_send_client": false, 00:20:37.949 "zerocopy_threshold": 0, 00:20:37.949 "tls_version": 0, 00:20:37.949 "enable_ktls": false 00:20:37.949 } 00:20:37.949 }, 00:20:37.949 { 00:20:37.949 "method": "sock_impl_set_options", 00:20:37.949 "params": { 00:20:37.949 "impl_name": "ssl", 00:20:37.949 "recv_buf_size": 4096, 00:20:37.949 "send_buf_size": 4096, 00:20:37.949 "enable_recv_pipe": true, 00:20:37.949 "enable_quickack": false, 00:20:37.949 "enable_placement_id": 0, 00:20:37.949 "enable_zerocopy_send_server": true, 00:20:37.949 "enable_zerocopy_send_client": false, 00:20:37.949 "zerocopy_threshold": 0, 00:20:37.949 "tls_version": 0, 00:20:37.949 "enable_ktls": false 00:20:37.949 } 00:20:37.949 } 00:20:37.949 ] 00:20:37.949 }, 00:20:37.949 { 00:20:37.949 "subsystem": "vmd", 00:20:37.949 "config": [] 00:20:37.949 }, 00:20:37.949 { 00:20:37.949 "subsystem": "accel", 00:20:37.949 "config": [ 00:20:37.949 { 00:20:37.949 "method": "accel_set_options", 00:20:37.949 "params": { 00:20:37.949 "small_cache_size": 128, 00:20:37.949 "large_cache_size": 16, 00:20:37.949 "task_count": 2048, 00:20:37.949 "sequence_count": 2048, 00:20:37.949 "buf_count": 2048 00:20:37.949 } 00:20:37.949 } 00:20:37.949 ] 00:20:37.949 }, 00:20:37.949 { 00:20:37.949 "subsystem": "bdev", 00:20:37.949 "config": [ 00:20:37.949 { 00:20:37.949 "method": "bdev_set_options", 00:20:37.949 "params": { 00:20:37.949 "bdev_io_pool_size": 65535, 00:20:37.949 "bdev_io_cache_size": 256, 00:20:37.949 "bdev_auto_examine": true, 00:20:37.949 "iobuf_small_cache_size": 128, 00:20:37.949 "iobuf_large_cache_size": 16 00:20:37.949 } 00:20:37.949 }, 00:20:37.949 { 00:20:37.949 "method": "bdev_raid_set_options", 00:20:37.949 "params": { 00:20:37.949 "process_window_size_kb": 1024 00:20:37.949 } 00:20:37.949 }, 00:20:37.949 { 00:20:37.949 "method": "bdev_iscsi_set_options", 00:20:37.949 "params": { 00:20:37.949 "timeout_sec": 30 00:20:37.949 } 00:20:37.949 }, 00:20:37.949 { 00:20:37.949 "method": "bdev_nvme_set_options", 00:20:37.949 "params": { 00:20:37.949 "action_on_timeout": "none", 00:20:37.949 "timeout_us": 0, 00:20:37.949 "timeout_admin_us": 0, 00:20:37.949 "keep_alive_timeout_ms": 10000, 00:20:37.949 "arbitration_burst": 0, 00:20:37.949 "low_priority_weight": 0, 00:20:37.949 "medium_priority_weight": 0, 00:20:37.949 "high_priority_weight": 0, 00:20:37.949 "nvme_adminq_poll_period_us": 10000, 00:20:37.949 "nvme_ioq_poll_period_us": 0, 00:20:37.949 "io_queue_requests": 512, 00:20:37.949 "delay_cmd_submit": true, 00:20:37.949 "transport_retry_count": 4, 00:20:37.949 "bdev_retry_count": 3, 00:20:37.949 "transport_ack_timeout": 0, 00:20:37.949 "ctrlr_loss_timeout_sec": 0, 00:20:37.949 "reconnect_delay_sec": 0, 00:20:37.949 "fast_io_fail_timeout_sec": 0, 00:20:37.949 "disable_auto_failback": false, 00:20:37.949 "generate_uuids": false, 00:20:37.949 "transport_tos": 0, 00:20:37.949 "nvme_error_stat": false, 00:20:37.949 "rdma_srq_size": 0, 00:20:37.949 "io_path_stat": false, 00:20:37.949 "allow_accel_sequence": false, 00:20:37.949 "rdma_max_cq_size": 0, 00:20:37.949 "rdma_cm_event_timeout_ms": 0, 00:20:37.949 "dhchap_digests": [ 00:20:37.949 "sha256", 00:20:37.949 "sha384", 00:20:37.949 "sha512" 
00:20:37.949 ], 00:20:37.949 "dhchap_dhgroups": [ 00:20:37.949 "null", 00:20:37.949 "ffdhe2048", 00:20:37.949 "ffdhe3072", 00:20:37.949 "ffdhe4096", 00:20:37.949 "ffdhe6144", 00:20:37.949 "ffdhe8192" 00:20:37.949 ] 00:20:37.949 } 00:20:37.949 }, 00:20:37.949 { 00:20:37.949 "method": "bdev_nvme_attach_controller", 00:20:37.949 "params": { 00:20:37.949 "name": "TLSTEST", 00:20:37.949 "trtype": "TCP", 00:20:37.949 "adrfam": "IPv4", 00:20:37.949 "traddr": "10.0.0.2", 00:20:37.949 "trsvcid": "4420", 00:20:37.949 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:37.949 "prchk_reftag": false, 00:20:37.949 "prchk_guard": false, 00:20:37.949 "ctrlr_loss_timeout_sec": 0, 00:20:37.949 "reconnect_delay_sec": 0, 00:20:37.949 "fast_io_fail_timeout_sec": 0, 00:20:37.949 "psk": "/tmp/tmp.hXGadZvObd", 00:20:37.949 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:37.949 "hdgst": false, 00:20:37.949 "ddgst": false 00:20:37.949 } 00:20:37.949 }, 00:20:37.949 { 00:20:37.949 "method": "bdev_nvme_set_hotplug", 00:20:37.949 "params": { 00:20:37.949 "period_us": 100000, 00:20:37.949 "enable": false 00:20:37.949 } 00:20:37.949 }, 00:20:37.949 { 00:20:37.949 "method": "bdev_wait_for_examine" 00:20:37.949 } 00:20:37.949 ] 00:20:37.949 }, 00:20:37.949 { 00:20:37.949 "subsystem": "nbd", 00:20:37.949 "config": [] 00:20:37.949 } 00:20:37.949 ] 00:20:37.949 }' 00:20:37.949 21:25:52 -- target/tls.sh@199 -- # killprocess 1257242 00:20:37.949 21:25:52 -- common/autotest_common.sh@936 -- # '[' -z 1257242 ']' 00:20:37.949 21:25:52 -- common/autotest_common.sh@940 -- # kill -0 1257242 00:20:37.949 21:25:52 -- common/autotest_common.sh@941 -- # uname 00:20:37.949 21:25:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:37.949 21:25:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1257242 00:20:38.207 21:25:52 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:20:38.207 21:25:52 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:20:38.207 21:25:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1257242' 00:20:38.207 killing process with pid 1257242 00:20:38.207 21:25:52 -- common/autotest_common.sh@955 -- # kill 1257242 00:20:38.207 Received shutdown signal, test time was about 10.000000 seconds 00:20:38.207 00:20:38.208 Latency(us) 00:20:38.208 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:38.208 =================================================================================================================== 00:20:38.208 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:38.208 [2024-04-24 21:25:52.933081] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:38.208 21:25:52 -- common/autotest_common.sh@960 -- # wait 1257242 00:20:38.466 21:25:53 -- target/tls.sh@200 -- # killprocess 1256910 00:20:38.466 21:25:53 -- common/autotest_common.sh@936 -- # '[' -z 1256910 ']' 00:20:38.466 21:25:53 -- common/autotest_common.sh@940 -- # kill -0 1256910 00:20:38.466 21:25:53 -- common/autotest_common.sh@941 -- # uname 00:20:38.466 21:25:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:38.466 21:25:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1256910 00:20:38.466 21:25:53 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:20:38.466 21:25:53 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:20:38.466 21:25:53 -- common/autotest_common.sh@954 -- # echo 
'killing process with pid 1256910' 00:20:38.466 killing process with pid 1256910 00:20:38.466 21:25:53 -- common/autotest_common.sh@955 -- # kill 1256910 00:20:38.466 [2024-04-24 21:25:53.334476] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:38.466 21:25:53 -- common/autotest_common.sh@960 -- # wait 1256910 00:20:39.035 21:25:53 -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:20:39.035 21:25:53 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:39.035 21:25:53 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:39.035 21:25:53 -- common/autotest_common.sh@10 -- # set +x 00:20:39.035 21:25:53 -- target/tls.sh@203 -- # echo '{ 00:20:39.035 "subsystems": [ 00:20:39.035 { 00:20:39.035 "subsystem": "keyring", 00:20:39.035 "config": [] 00:20:39.035 }, 00:20:39.035 { 00:20:39.035 "subsystem": "iobuf", 00:20:39.035 "config": [ 00:20:39.035 { 00:20:39.035 "method": "iobuf_set_options", 00:20:39.035 "params": { 00:20:39.035 "small_pool_count": 8192, 00:20:39.035 "large_pool_count": 1024, 00:20:39.035 "small_bufsize": 8192, 00:20:39.035 "large_bufsize": 135168 00:20:39.035 } 00:20:39.035 } 00:20:39.035 ] 00:20:39.035 }, 00:20:39.035 { 00:20:39.035 "subsystem": "sock", 00:20:39.035 "config": [ 00:20:39.035 { 00:20:39.035 "method": "sock_impl_set_options", 00:20:39.035 "params": { 00:20:39.035 "impl_name": "posix", 00:20:39.035 "recv_buf_size": 2097152, 00:20:39.035 "send_buf_size": 2097152, 00:20:39.035 "enable_recv_pipe": true, 00:20:39.035 "enable_quickack": false, 00:20:39.035 "enable_placement_id": 0, 00:20:39.035 "enable_zerocopy_send_server": true, 00:20:39.035 "enable_zerocopy_send_client": false, 00:20:39.035 "zerocopy_threshold": 0, 00:20:39.035 "tls_version": 0, 00:20:39.035 "enable_ktls": false 00:20:39.035 } 00:20:39.035 }, 00:20:39.035 { 00:20:39.035 "method": "sock_impl_set_options", 00:20:39.035 "params": { 00:20:39.035 "impl_name": "ssl", 00:20:39.035 "recv_buf_size": 4096, 00:20:39.035 "send_buf_size": 4096, 00:20:39.035 "enable_recv_pipe": true, 00:20:39.035 "enable_quickack": false, 00:20:39.035 "enable_placement_id": 0, 00:20:39.035 "enable_zerocopy_send_server": true, 00:20:39.035 "enable_zerocopy_send_client": false, 00:20:39.035 "zerocopy_threshold": 0, 00:20:39.035 "tls_version": 0, 00:20:39.035 "enable_ktls": false 00:20:39.035 } 00:20:39.035 } 00:20:39.035 ] 00:20:39.035 }, 00:20:39.035 { 00:20:39.035 "subsystem": "vmd", 00:20:39.035 "config": [] 00:20:39.035 }, 00:20:39.035 { 00:20:39.035 "subsystem": "accel", 00:20:39.035 "config": [ 00:20:39.035 { 00:20:39.035 "method": "accel_set_options", 00:20:39.035 "params": { 00:20:39.035 "small_cache_size": 128, 00:20:39.035 "large_cache_size": 16, 00:20:39.035 "task_count": 2048, 00:20:39.035 "sequence_count": 2048, 00:20:39.035 "buf_count": 2048 00:20:39.035 } 00:20:39.035 } 00:20:39.035 ] 00:20:39.035 }, 00:20:39.035 { 00:20:39.035 "subsystem": "bdev", 00:20:39.035 "config": [ 00:20:39.035 { 00:20:39.035 "method": "bdev_set_options", 00:20:39.035 "params": { 00:20:39.035 "bdev_io_pool_size": 65535, 00:20:39.035 "bdev_io_cache_size": 256, 00:20:39.035 "bdev_auto_examine": true, 00:20:39.035 "iobuf_small_cache_size": 128, 00:20:39.035 "iobuf_large_cache_size": 16 00:20:39.035 } 00:20:39.035 }, 00:20:39.035 { 00:20:39.035 "method": "bdev_raid_set_options", 00:20:39.035 "params": { 00:20:39.035 "process_window_size_kb": 1024 00:20:39.035 } 00:20:39.035 }, 00:20:39.035 { 00:20:39.035 "method": 
"bdev_iscsi_set_options", 00:20:39.035 "params": { 00:20:39.035 "timeout_sec": 30 00:20:39.035 } 00:20:39.035 }, 00:20:39.035 { 00:20:39.035 "method": "bdev_nvme_set_options", 00:20:39.035 "params": { 00:20:39.035 "action_on_timeout": "none", 00:20:39.035 "timeout_us": 0, 00:20:39.035 "timeout_admin_us": 0, 00:20:39.035 "keep_alive_timeout_ms": 10000, 00:20:39.035 "arbitration_burst": 0, 00:20:39.035 "low_priority_weight": 0, 00:20:39.035 "medium_priority_weight": 0, 00:20:39.035 "high_priority_weight": 0, 00:20:39.035 "nvme_adminq_poll_period_us": 10000, 00:20:39.035 "nvme_ioq_poll_period_us": 0, 00:20:39.035 "io_queue_requests": 0, 00:20:39.035 "delay_cmd_submit": true, 00:20:39.035 "transport_retry_count": 4, 00:20:39.035 "bdev_retry_count": 3, 00:20:39.035 "transport_ack_timeout": 0, 00:20:39.035 "ctrlr_loss_timeout_sec": 0, 00:20:39.035 "reconnect_delay_sec": 0, 00:20:39.035 "fast_io_fail_timeout_sec": 0, 00:20:39.035 "disable_auto_failback": false, 00:20:39.035 "generate_uuids": false, 00:20:39.035 "transport_tos": 0, 00:20:39.035 "nvme_error_stat": false, 00:20:39.035 "rdma_srq_size": 0, 00:20:39.035 "io_path_stat": false, 00:20:39.035 "allow_accel_sequence": false, 00:20:39.035 "rdma_max_cq_size": 0, 00:20:39.035 "rdma_cm_event_timeout_ms": 0, 00:20:39.035 "dhchap_digests": [ 00:20:39.035 "sha256", 00:20:39.035 "sha384", 00:20:39.035 "sha512" 00:20:39.035 ], 00:20:39.035 "dhchap_dhgroups": [ 00:20:39.035 "null", 00:20:39.035 "ffdhe2048", 00:20:39.035 "ffdhe3072", 00:20:39.035 "ffdhe4096", 00:20:39.035 "ffdhe6144", 00:20:39.035 "ffdhe8192" 00:20:39.035 ] 00:20:39.035 } 00:20:39.035 }, 00:20:39.035 { 00:20:39.035 "method": "bdev_nvme_set_hotplug", 00:20:39.035 "params": { 00:20:39.035 "period_us": 100000, 00:20:39.035 "enable": false 00:20:39.035 } 00:20:39.035 }, 00:20:39.035 { 00:20:39.035 "method": "bdev_malloc_create", 00:20:39.035 "params": { 00:20:39.035 "name": "malloc0", 00:20:39.035 "num_blocks": 8192, 00:20:39.035 "block_size": 4096, 00:20:39.035 "physical_block_size": 4096, 00:20:39.035 "uuid": "e12bc469-760f-4a77-a2eb-0b33411ab625", 00:20:39.035 "optimal_io_boundary": 0 00:20:39.035 } 00:20:39.035 }, 00:20:39.035 { 00:20:39.035 "method": "bdev_wait_for_examine" 00:20:39.035 } 00:20:39.035 ] 00:20:39.035 }, 00:20:39.035 { 00:20:39.035 "subsystem": "nbd", 00:20:39.035 "config": [] 00:20:39.035 }, 00:20:39.035 { 00:20:39.035 "subsystem": "scheduler", 00:20:39.035 "config": [ 00:20:39.035 { 00:20:39.035 "method": "framework_set_scheduler", 00:20:39.035 "params": { 00:20:39.036 "name": "static" 00:20:39.036 } 00:20:39.036 } 00:20:39.036 ] 00:20:39.036 }, 00:20:39.036 { 00:20:39.036 "subsystem": "nvmf", 00:20:39.036 "config": [ 00:20:39.036 { 00:20:39.036 "method": "nvmf_set_config", 00:20:39.036 "params": { 00:20:39.036 "discovery_filter": "match_any", 00:20:39.036 "admin_cmd_passthru": { 00:20:39.036 "identify_ctrlr": false 00:20:39.036 } 00:20:39.036 } 00:20:39.036 }, 00:20:39.036 { 00:20:39.036 "method": "nvmf_set_max_subsystems", 00:20:39.036 "params": { 00:20:39.036 "max_subsystems": 1024 00:20:39.036 } 00:20:39.036 }, 00:20:39.036 { 00:20:39.036 "method": "nvmf_set_crdt", 00:20:39.036 "params": { 00:20:39.036 "crdt1": 0, 00:20:39.036 "crdt2": 0, 00:20:39.036 "crdt3": 0 00:20:39.036 } 00:20:39.036 }, 00:20:39.036 { 00:20:39.036 "method": "nvmf_create_transport", 00:20:39.036 "params": { 00:20:39.036 "trtype": "TCP", 00:20:39.036 "max_queue_depth": 128, 00:20:39.036 "max_io_qpairs_per_ctrlr": 127, 00:20:39.036 "in_capsule_data_size": 4096, 00:20:39.036 "max_io_size": 
131072, 00:20:39.036 "io_unit_size": 131072, 00:20:39.036 "max_aq_depth": 128, 00:20:39.036 "num_shared_buffers": 511, 00:20:39.036 "buf_cache_size": 4294967295, 00:20:39.036 "dif_insert_or_strip": false, 00:20:39.036 "zcopy": false, 00:20:39.036 "c2h_success": false, 00:20:39.036 "sock_priority": 0, 00:20:39.036 "abort_timeout_sec": 1, 00:20:39.036 "ack_timeout": 0 00:20:39.036 } 00:20:39.036 }, 00:20:39.036 { 00:20:39.036 "method": "nvmf_create_subsystem", 00:20:39.036 "params": { 00:20:39.036 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:39.036 "allow_any_host": false, 00:20:39.036 "serial_number": "SPDK00000000000001", 00:20:39.036 "model_number": "SPDK bdev Controller", 00:20:39.036 "max_namespaces": 10, 00:20:39.036 "min_cntlid": 1, 00:20:39.036 "max_cntlid": 65519, 00:20:39.036 "ana_reporting": false 00:20:39.036 } 00:20:39.036 }, 00:20:39.036 { 00:20:39.036 "method": "nvmf_subsystem_add_host", 00:20:39.036 "params": { 00:20:39.036 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:39.036 "host": "nqn.2016-06.io.spdk:host1", 00:20:39.036 "psk": "/tmp/tmp.hXGadZvObd" 00:20:39.036 } 00:20:39.036 }, 00:20:39.036 { 00:20:39.036 "method": "nvmf_subsystem_add_ns", 00:20:39.036 "params": { 00:20:39.036 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:39.036 "namespace": { 00:20:39.036 "nsid": 1, 00:20:39.036 "bdev_name": "malloc0", 00:20:39.036 "nguid": "E12BC469760F4A77A2EB0B33411AB625", 00:20:39.036 "uuid": "e12bc469-760f-4a77-a2eb-0b33411ab625", 00:20:39.036 "no_auto_visible": false 00:20:39.036 } 00:20:39.036 } 00:20:39.036 }, 00:20:39.036 { 00:20:39.036 "method": "nvmf_subsystem_add_listener", 00:20:39.036 "params": { 00:20:39.036 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:39.036 "listen_address": { 00:20:39.036 "trtype": "TCP", 00:20:39.036 "adrfam": "IPv4", 00:20:39.036 "traddr": "10.0.0.2", 00:20:39.036 "trsvcid": "4420" 00:20:39.036 }, 00:20:39.036 "secure_channel": true 00:20:39.036 } 00:20:39.036 } 00:20:39.036 ] 00:20:39.036 } 00:20:39.036 ] 00:20:39.036 }' 00:20:39.036 21:25:53 -- nvmf/common.sh@470 -- # nvmfpid=1257724 00:20:39.036 21:25:53 -- nvmf/common.sh@471 -- # waitforlisten 1257724 00:20:39.036 21:25:53 -- common/autotest_common.sh@817 -- # '[' -z 1257724 ']' 00:20:39.036 21:25:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:39.036 21:25:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:39.036 21:25:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:39.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:39.036 21:25:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:39.036 21:25:53 -- common/autotest_common.sh@10 -- # set +x 00:20:39.036 21:25:53 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:20:39.036 [2024-04-24 21:25:53.913770] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization... 
00:20:39.036 [2024-04-24 21:25:53.913879] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:39.036 EAL: No free 2048 kB hugepages reported on node 1 00:20:39.296 [2024-04-24 21:25:54.039732] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:39.296 [2024-04-24 21:25:54.136846] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:39.296 [2024-04-24 21:25:54.136883] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:39.296 [2024-04-24 21:25:54.136893] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:39.296 [2024-04-24 21:25:54.136903] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:39.296 [2024-04-24 21:25:54.136911] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:39.296 [2024-04-24 21:25:54.136992] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:39.556 [2024-04-24 21:25:54.428286] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:39.556 [2024-04-24 21:25:54.444233] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:39.556 [2024-04-24 21:25:54.460246] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:39.556 [2024-04-24 21:25:54.460475] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:39.815 21:25:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:39.815 21:25:54 -- common/autotest_common.sh@850 -- # return 0 00:20:39.815 21:25:54 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:39.815 21:25:54 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:39.815 21:25:54 -- common/autotest_common.sh@10 -- # set +x 00:20:39.815 21:25:54 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:39.815 21:25:54 -- target/tls.sh@207 -- # bdevperf_pid=1257881 00:20:39.815 21:25:54 -- target/tls.sh@208 -- # waitforlisten 1257881 /var/tmp/bdevperf.sock 00:20:39.815 21:25:54 -- common/autotest_common.sh@817 -- # '[' -z 1257881 ']' 00:20:39.815 21:25:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:39.815 21:25:54 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:39.815 21:25:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:39.815 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
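Both ends of this test take their JSON configuration through a pipe file descriptor rather than a file on disk: the target above is launched with -c /dev/fd/62 and the bdevperf initiator below with -c /dev/fd/63. A minimal sketch of the pattern, assuming bash process substitution is what backs those descriptors (the variable name and config body are illustrative):

  # <(...) expands to a /dev/fd/N path the application can open and read,
  # so the generated JSON never has to touch the disk.
  tgt_config='{ "subsystems": [ ... ] }'
  ./build/bin/nvmf_tgt -m 0x2 -c <(echo "$tgt_config")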
00:20:39.815 21:25:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:39.815 21:25:54 -- common/autotest_common.sh@10 -- # set +x 00:20:39.815 21:25:54 -- target/tls.sh@204 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:20:39.815 21:25:54 -- target/tls.sh@204 -- # echo '{ 00:20:39.815 "subsystems": [ 00:20:39.815 { 00:20:39.815 "subsystem": "keyring", 00:20:39.815 "config": [] 00:20:39.815 }, 00:20:39.815 { 00:20:39.815 "subsystem": "iobuf", 00:20:39.815 "config": [ 00:20:39.815 { 00:20:39.815 "method": "iobuf_set_options", 00:20:39.815 "params": { 00:20:39.815 "small_pool_count": 8192, 00:20:39.815 "large_pool_count": 1024, 00:20:39.815 "small_bufsize": 8192, 00:20:39.815 "large_bufsize": 135168 00:20:39.815 } 00:20:39.815 } 00:20:39.815 ] 00:20:39.815 }, 00:20:39.815 { 00:20:39.815 "subsystem": "sock", 00:20:39.815 "config": [ 00:20:39.815 { 00:20:39.816 "method": "sock_impl_set_options", 00:20:39.816 "params": { 00:20:39.816 "impl_name": "posix", 00:20:39.816 "recv_buf_size": 2097152, 00:20:39.816 "send_buf_size": 2097152, 00:20:39.816 "enable_recv_pipe": true, 00:20:39.816 "enable_quickack": false, 00:20:39.816 "enable_placement_id": 0, 00:20:39.816 "enable_zerocopy_send_server": true, 00:20:39.816 "enable_zerocopy_send_client": false, 00:20:39.816 "zerocopy_threshold": 0, 00:20:39.816 "tls_version": 0, 00:20:39.816 "enable_ktls": false 00:20:39.816 } 00:20:39.816 }, 00:20:39.816 { 00:20:39.816 "method": "sock_impl_set_options", 00:20:39.816 "params": { 00:20:39.816 "impl_name": "ssl", 00:20:39.816 "recv_buf_size": 4096, 00:20:39.816 "send_buf_size": 4096, 00:20:39.816 "enable_recv_pipe": true, 00:20:39.816 "enable_quickack": false, 00:20:39.816 "enable_placement_id": 0, 00:20:39.816 "enable_zerocopy_send_server": true, 00:20:39.816 "enable_zerocopy_send_client": false, 00:20:39.816 "zerocopy_threshold": 0, 00:20:39.816 "tls_version": 0, 00:20:39.816 "enable_ktls": false 00:20:39.816 } 00:20:39.816 } 00:20:39.816 ] 00:20:39.816 }, 00:20:39.816 { 00:20:39.816 "subsystem": "vmd", 00:20:39.816 "config": [] 00:20:39.816 }, 00:20:39.816 { 00:20:39.816 "subsystem": "accel", 00:20:39.816 "config": [ 00:20:39.816 { 00:20:39.816 "method": "accel_set_options", 00:20:39.816 "params": { 00:20:39.816 "small_cache_size": 128, 00:20:39.816 "large_cache_size": 16, 00:20:39.816 "task_count": 2048, 00:20:39.816 "sequence_count": 2048, 00:20:39.816 "buf_count": 2048 00:20:39.816 } 00:20:39.816 } 00:20:39.816 ] 00:20:39.816 }, 00:20:39.816 { 00:20:39.816 "subsystem": "bdev", 00:20:39.816 "config": [ 00:20:39.816 { 00:20:39.816 "method": "bdev_set_options", 00:20:39.816 "params": { 00:20:39.816 "bdev_io_pool_size": 65535, 00:20:39.816 "bdev_io_cache_size": 256, 00:20:39.816 "bdev_auto_examine": true, 00:20:39.816 "iobuf_small_cache_size": 128, 00:20:39.816 "iobuf_large_cache_size": 16 00:20:39.816 } 00:20:39.816 }, 00:20:39.816 { 00:20:39.816 "method": "bdev_raid_set_options", 00:20:39.816 "params": { 00:20:39.816 "process_window_size_kb": 1024 00:20:39.816 } 00:20:39.816 }, 00:20:39.816 { 00:20:39.816 "method": "bdev_iscsi_set_options", 00:20:39.816 "params": { 00:20:39.816 "timeout_sec": 30 00:20:39.816 } 00:20:39.816 }, 00:20:39.816 { 00:20:39.816 "method": "bdev_nvme_set_options", 00:20:39.816 "params": { 00:20:39.816 "action_on_timeout": "none", 00:20:39.816 "timeout_us": 0, 00:20:39.816 "timeout_admin_us": 0, 00:20:39.816 "keep_alive_timeout_ms": 10000, 00:20:39.816 
"arbitration_burst": 0, 00:20:39.816 "low_priority_weight": 0, 00:20:39.816 "medium_priority_weight": 0, 00:20:39.816 "high_priority_weight": 0, 00:20:39.816 "nvme_adminq_poll_period_us": 10000, 00:20:39.816 "nvme_ioq_poll_period_us": 0, 00:20:39.816 "io_queue_requests": 512, 00:20:39.816 "delay_cmd_submit": true, 00:20:39.816 "transport_retry_count": 4, 00:20:39.816 "bdev_retry_count": 3, 00:20:39.816 "transport_ack_timeout": 0, 00:20:39.816 "ctrlr_loss_timeout_sec": 0, 00:20:39.816 "reconnect_delay_sec": 0, 00:20:39.816 "fast_io_fail_timeout_sec": 0, 00:20:39.816 "disable_auto_failback": false, 00:20:39.816 "generate_uuids": false, 00:20:39.816 "transport_tos": 0, 00:20:39.816 "nvme_error_stat": false, 00:20:39.816 "rdma_srq_size": 0, 00:20:39.816 "io_path_stat": false, 00:20:39.816 "allow_accel_sequence": false, 00:20:39.816 "rdma_max_cq_size": 0, 00:20:39.816 "rdma_cm_event_timeout_ms": 0, 00:20:39.816 "dhchap_digests": [ 00:20:39.816 "sha256", 00:20:39.816 "sha384", 00:20:39.816 "sha512" 00:20:39.816 ], 00:20:39.816 "dhchap_dhgroups": [ 00:20:39.816 "null", 00:20:39.816 "ffdhe2048", 00:20:39.816 "ffdhe3072", 00:20:39.816 "ffdhe4096", 00:20:39.816 "ffdhe6144", 00:20:39.816 "ffdhe8192" 00:20:39.816 ] 00:20:39.816 } 00:20:39.816 }, 00:20:39.816 { 00:20:39.816 "method": "bdev_nvme_attach_controller", 00:20:39.816 "params": { 00:20:39.816 "name": "TLSTEST", 00:20:39.816 "trtype": "TCP", 00:20:39.816 "adrfam": "IPv4", 00:20:39.816 "traddr": "10.0.0.2", 00:20:39.816 "trsvcid": "4420", 00:20:39.816 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:39.816 "prchk_reftag": false, 00:20:39.816 "prchk_guard": false, 00:20:39.816 "ctrlr_loss_timeout_sec": 0, 00:20:39.816 "reconnect_delay_sec": 0, 00:20:39.816 "fast_io_fail_timeout_sec": 0, 00:20:39.816 "psk": "/tmp/tmp.hXGadZvObd", 00:20:39.816 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:39.816 "hdgst": false, 00:20:39.816 "ddgst": false 00:20:39.816 } 00:20:39.816 }, 00:20:39.816 { 00:20:39.816 "method": "bdev_nvme_set_hotplug", 00:20:39.817 "params": { 00:20:39.817 "period_us": 100000, 00:20:39.817 "enable": false 00:20:39.817 } 00:20:39.817 }, 00:20:39.817 { 00:20:39.817 "method": "bdev_wait_for_examine" 00:20:39.817 } 00:20:39.817 ] 00:20:39.817 }, 00:20:39.817 { 00:20:39.817 "subsystem": "nbd", 00:20:39.817 "config": [] 00:20:39.817 } 00:20:39.817 ] 00:20:39.817 }' 00:20:39.817 [2024-04-24 21:25:54.708449] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization... 
00:20:39.817 [2024-04-24 21:25:54.708553] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1257881 ] 00:20:39.817 EAL: No free 2048 kB hugepages reported on node 1 00:20:40.075 [2024-04-24 21:25:54.810890] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:40.075 [2024-04-24 21:25:54.905135] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:40.333 [2024-04-24 21:25:55.113066] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:40.333 [2024-04-24 21:25:55.113171] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:40.592 21:25:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:40.592 21:25:55 -- common/autotest_common.sh@850 -- # return 0 00:20:40.592 21:25:55 -- target/tls.sh@211 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:40.592 Running I/O for 10 seconds... 00:20:50.574 00:20:50.574 Latency(us) 00:20:50.575 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:50.575 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:50.575 Verification LBA range: start 0x0 length 0x2000 00:20:50.575 TLSTESTn1 : 10.06 3821.91 14.93 0.00 0.00 33394.11 4725.49 76711.61 00:20:50.575 =================================================================================================================== 00:20:50.575 Total : 3821.91 14.93 0.00 0.00 33394.11 4725.49 76711.61 00:20:50.575 0 00:20:50.833 21:26:05 -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:50.833 21:26:05 -- target/tls.sh@214 -- # killprocess 1257881 00:20:50.833 21:26:05 -- common/autotest_common.sh@936 -- # '[' -z 1257881 ']' 00:20:50.833 21:26:05 -- common/autotest_common.sh@940 -- # kill -0 1257881 00:20:50.833 21:26:05 -- common/autotest_common.sh@941 -- # uname 00:20:50.833 21:26:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:50.833 21:26:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1257881 00:20:50.833 21:26:05 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:20:50.833 21:26:05 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:20:50.833 21:26:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1257881' 00:20:50.833 killing process with pid 1257881 00:20:50.833 21:26:05 -- common/autotest_common.sh@955 -- # kill 1257881 00:20:50.833 Received shutdown signal, test time was about 10.000000 seconds 00:20:50.833 00:20:50.833 Latency(us) 00:20:50.833 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:50.833 =================================================================================================================== 00:20:50.833 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:50.833 [2024-04-24 21:26:05.591385] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:50.833 21:26:05 -- common/autotest_common.sh@960 -- # wait 1257881 00:20:51.092 21:26:05 -- target/tls.sh@215 -- # killprocess 1257724 00:20:51.092 21:26:05 -- common/autotest_common.sh@936 -- # '[' -z 1257724 ']' 
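The -z flag on the bdevperf command line above makes it start idle and wait for an RPC, so the verify workload only begins when bdevperf.py issues perform_tests against its socket; that is the call traced just before the 10-second run. The standalone form, as it appears in the trace:

  # Kick a waiting (-z) bdevperf instance; -t 20 is the RPC timeout in seconds
  ./examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests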
00:20:51.092 21:26:05 -- common/autotest_common.sh@940 -- # kill -0 1257724 00:20:51.092 21:26:05 -- common/autotest_common.sh@941 -- # uname 00:20:51.092 21:26:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:51.092 21:26:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1257724 00:20:51.092 21:26:06 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:20:51.092 21:26:06 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:20:51.092 21:26:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1257724' 00:20:51.092 killing process with pid 1257724 00:20:51.092 21:26:06 -- common/autotest_common.sh@955 -- # kill 1257724 00:20:51.092 [2024-04-24 21:26:06.011947] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:51.092 21:26:06 -- common/autotest_common.sh@960 -- # wait 1257724 00:20:51.660 21:26:06 -- target/tls.sh@218 -- # nvmfappstart 00:20:51.660 21:26:06 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:51.660 21:26:06 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:51.660 21:26:06 -- common/autotest_common.sh@10 -- # set +x 00:20:51.660 21:26:06 -- nvmf/common.sh@470 -- # nvmfpid=1260228 00:20:51.660 21:26:06 -- nvmf/common.sh@471 -- # waitforlisten 1260228 00:20:51.660 21:26:06 -- common/autotest_common.sh@817 -- # '[' -z 1260228 ']' 00:20:51.660 21:26:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:51.660 21:26:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:51.660 21:26:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:51.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:51.660 21:26:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:51.660 21:26:06 -- common/autotest_common.sh@10 -- # set +x 00:20:51.660 21:26:06 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:51.660 [2024-04-24 21:26:06.612515] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization... 00:20:51.660 [2024-04-24 21:26:06.612630] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:51.920 EAL: No free 2048 kB hugepages reported on node 1 00:20:51.920 [2024-04-24 21:26:06.739709] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:51.920 [2024-04-24 21:26:06.831187] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:51.921 [2024-04-24 21:26:06.831223] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:51.921 [2024-04-24 21:26:06.831232] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:51.921 [2024-04-24 21:26:06.831245] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:51.921 [2024-04-24 21:26:06.831252] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:51.921 [2024-04-24 21:26:06.831282] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:52.491 21:26:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:52.491 21:26:07 -- common/autotest_common.sh@850 -- # return 0 00:20:52.491 21:26:07 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:52.491 21:26:07 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:52.491 21:26:07 -- common/autotest_common.sh@10 -- # set +x 00:20:52.491 21:26:07 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:52.491 21:26:07 -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.hXGadZvObd 00:20:52.491 21:26:07 -- target/tls.sh@49 -- # local key=/tmp/tmp.hXGadZvObd 00:20:52.491 21:26:07 -- target/tls.sh@51 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:52.491 [2024-04-24 21:26:07.455950] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:52.749 21:26:07 -- target/tls.sh@52 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:52.749 21:26:07 -- target/tls.sh@53 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:53.008 [2024-04-24 21:26:07.723994] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:53.008 [2024-04-24 21:26:07.724217] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:53.008 21:26:07 -- target/tls.sh@55 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:53.008 malloc0 00:20:53.008 21:26:07 -- target/tls.sh@56 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:53.267 21:26:08 -- target/tls.sh@58 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.hXGadZvObd 00:20:53.267 [2024-04-24 21:26:08.150192] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:53.267 21:26:08 -- target/tls.sh@222 -- # bdevperf_pid=1260584 00:20:53.267 21:26:08 -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:53.267 21:26:08 -- target/tls.sh@220 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:53.267 21:26:08 -- target/tls.sh@225 -- # waitforlisten 1260584 /var/tmp/bdevperf.sock 00:20:53.267 21:26:08 -- common/autotest_common.sh@817 -- # '[' -z 1260584 ']' 00:20:53.267 21:26:08 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:53.267 21:26:08 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:53.267 21:26:08 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:53.267 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:53.267 21:26:08 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:53.267 21:26:08 -- common/autotest_common.sh@10 -- # set +x 00:20:53.526 [2024-04-24 21:26:08.236150] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization... 
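Stripped of the xtrace noise, the setup_nvmf_tgt sequence traced above (target/tls.sh@51-@58) reduces to six rpc.py calls: create the TCP transport, create the subsystem, add a TLS-enabled listener (-k), back the subsystem with a malloc bdev, and register the allowed host together with its PSK:

  ./scripts/rpc.py nvmf_create_transport -t tcp -o
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  ./scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.hXGadZvObd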
00:20:53.527 [2024-04-24 21:26:08.236262] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1260584 ] 00:20:53.527 EAL: No free 2048 kB hugepages reported on node 1 00:20:53.527 [2024-04-24 21:26:08.348106] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:53.527 [2024-04-24 21:26:08.442392] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:54.098 21:26:08 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:54.098 21:26:08 -- common/autotest_common.sh@850 -- # return 0 00:20:54.098 21:26:08 -- target/tls.sh@227 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.hXGadZvObd 00:20:54.357 21:26:09 -- target/tls.sh@228 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:54.357 [2024-04-24 21:26:09.236871] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:54.357 nvme0n1 00:20:54.615 21:26:09 -- target/tls.sh@232 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:54.615 Running I/O for 1 seconds... 00:20:55.554 00:20:55.554 Latency(us) 00:20:55.554 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:55.554 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:55.554 Verification LBA range: start 0x0 length 0x2000 00:20:55.554 nvme0n1 : 1.01 5156.62 20.14 0.00 0.00 24634.98 6346.64 50221.27 00:20:55.554 =================================================================================================================== 00:20:55.554 Total : 5156.62 20.14 0.00 0.00 24634.98 6346.64 50221.27 00:20:55.554 0 00:20:55.554 21:26:10 -- target/tls.sh@234 -- # killprocess 1260584 00:20:55.554 21:26:10 -- common/autotest_common.sh@936 -- # '[' -z 1260584 ']' 00:20:55.554 21:26:10 -- common/autotest_common.sh@940 -- # kill -0 1260584 00:20:55.554 21:26:10 -- common/autotest_common.sh@941 -- # uname 00:20:55.554 21:26:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:55.554 21:26:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1260584 00:20:55.554 21:26:10 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:20:55.554 21:26:10 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:20:55.554 21:26:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1260584' 00:20:55.554 killing process with pid 1260584 00:20:55.554 21:26:10 -- common/autotest_common.sh@955 -- # kill 1260584 00:20:55.554 Received shutdown signal, test time was about 1.000000 seconds 00:20:55.554 00:20:55.554 Latency(us) 00:20:55.555 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:55.555 =================================================================================================================== 00:20:55.555 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:55.555 21:26:10 -- common/autotest_common.sh@960 -- # wait 1260584 00:20:56.123 21:26:10 -- target/tls.sh@235 -- # killprocess 1260228 00:20:56.123 21:26:10 -- common/autotest_common.sh@936 -- # '[' -z 1260228 ']' 00:20:56.123 
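The initiator side just traced (target/tls.sh@227-@228) uses the keyring variant of the attach: the PSK file is first registered as a named key inside the bdevperf process, and the controller is then attached by key name rather than by path, avoiding the deprecated spdk_nvme_ctrlr_opts.psk field seen in the earlier run:

  ./scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.hXGadZvObd
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1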
21:26:10 -- common/autotest_common.sh@940 -- # kill -0 1260228 00:20:56.123 21:26:10 -- common/autotest_common.sh@941 -- # uname 00:20:56.123 21:26:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:56.123 21:26:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1260228 00:20:56.123 21:26:10 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:56.123 21:26:10 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:56.123 21:26:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1260228' 00:20:56.123 killing process with pid 1260228 00:20:56.124 21:26:10 -- common/autotest_common.sh@955 -- # kill 1260228 00:20:56.124 [2024-04-24 21:26:10.883105] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:56.124 21:26:10 -- common/autotest_common.sh@960 -- # wait 1260228 00:20:56.690 21:26:11 -- target/tls.sh@238 -- # nvmfappstart 00:20:56.690 21:26:11 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:56.690 21:26:11 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:56.690 21:26:11 -- common/autotest_common.sh@10 -- # set +x 00:20:56.690 21:26:11 -- nvmf/common.sh@470 -- # nvmfpid=1261207 00:20:56.690 21:26:11 -- nvmf/common.sh@471 -- # waitforlisten 1261207 00:20:56.690 21:26:11 -- common/autotest_common.sh@817 -- # '[' -z 1261207 ']' 00:20:56.690 21:26:11 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:56.690 21:26:11 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:56.690 21:26:11 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:56.690 21:26:11 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:56.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:56.690 21:26:11 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:56.690 21:26:11 -- common/autotest_common.sh@10 -- # set +x 00:20:56.690 [2024-04-24 21:26:11.457760] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization... 00:20:56.690 [2024-04-24 21:26:11.457865] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:56.690 EAL: No free 2048 kB hugepages reported on node 1 00:20:56.690 [2024-04-24 21:26:11.576258] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:56.950 [2024-04-24 21:26:11.666932] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:56.950 [2024-04-24 21:26:11.666968] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:56.950 [2024-04-24 21:26:11.666979] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:56.950 [2024-04-24 21:26:11.666988] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:56.950 [2024-04-24 21:26:11.666995] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
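The kill sequences that recur throughout this log come from the killprocess helper in autotest_common.sh. A simplified reconstruction inferred from the traced steps (the actual function body is not shown in the log, so details such as the sudo branch are assumptions):

  killprocess() {
      local pid=$1
      [ -n "$pid" ] || return 1                            # @936: reject an empty pid
      kill -0 "$pid" || return 1                           # @940: is the process still alive?
      if [ "$(uname)" = Linux ]; then                      # @941
          process_name=$(ps --no-headers -o comm= "$pid")  # @942: e.g. reactor_0 / reactor_1
      fi
      [ "$process_name" = sudo ] && return 1               # @946: assumed guard against killing a sudo wrapper
      echo "killing process with pid $pid"                 # @954
      kill "$pid"                                          # @955
      wait "$pid"                                          # @960
  }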
00:20:56.950 [2024-04-24 21:26:11.667023] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:57.209 21:26:12 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:57.209 21:26:12 -- common/autotest_common.sh@850 -- # return 0 00:20:57.209 21:26:12 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:57.209 21:26:12 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:57.209 21:26:12 -- common/autotest_common.sh@10 -- # set +x 00:20:57.209 21:26:12 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:57.209 21:26:12 -- target/tls.sh@239 -- # rpc_cmd 00:20:57.209 21:26:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:57.209 21:26:12 -- common/autotest_common.sh@10 -- # set +x 00:20:57.469 [2024-04-24 21:26:12.176396] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:57.469 malloc0 00:20:57.469 [2024-04-24 21:26:12.222092] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:57.469 [2024-04-24 21:26:12.222308] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:57.469 21:26:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:57.469 21:26:12 -- target/tls.sh@252 -- # bdevperf_pid=1261247 00:20:57.469 21:26:12 -- target/tls.sh@254 -- # waitforlisten 1261247 /var/tmp/bdevperf.sock 00:20:57.469 21:26:12 -- common/autotest_common.sh@817 -- # '[' -z 1261247 ']' 00:20:57.469 21:26:12 -- target/tls.sh@250 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:57.469 21:26:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:57.469 21:26:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:57.469 21:26:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:57.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:57.469 21:26:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:57.469 21:26:12 -- common/autotest_common.sh@10 -- # set +x 00:20:57.469 [2024-04-24 21:26:12.320678] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization... 
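The bare rpc_cmd at target/tls.sh@239 configures this target over /var/tmp/spdk.sock without spawning rpc.py per call; in the autotest helpers this is presumed to go through a persistent rpc.py server coprocess, though only the effects are visible in the notices above (TCP transport init, a malloc0 bdev, and a TLS listener on 10.0.0.2 port 4420). An approximate standalone equivalent, with flags and sizes assumed rather than taken from this trace:

  ./scripts/rpc.py nvmf_create_transport -t tcp -o
  ./scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0     # size assumed from the earlier run
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k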
00:20:57.469 [2024-04-24 21:26:12.320786] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1261247 ] 00:20:57.469 EAL: No free 2048 kB hugepages reported on node 1 00:20:57.728 [2024-04-24 21:26:12.436517] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:57.728 [2024-04-24 21:26:12.533192] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:58.295 21:26:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:58.295 21:26:13 -- common/autotest_common.sh@850 -- # return 0 00:20:58.295 21:26:13 -- target/tls.sh@255 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.hXGadZvObd 00:20:58.295 21:26:13 -- target/tls.sh@256 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:58.553 [2024-04-24 21:26:13.261583] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:58.553 nvme0n1 00:20:58.553 21:26:13 -- target/tls.sh@260 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:58.553 Running I/O for 1 seconds... 00:20:59.493 00:20:59.493 Latency(us) 00:20:59.493 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:59.493 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:59.493 Verification LBA range: start 0x0 length 0x2000 00:20:59.493 nvme0n1 : 1.01 5587.84 21.83 0.00 0.00 22735.91 6001.72 28283.96 00:20:59.493 =================================================================================================================== 00:20:59.493 Total : 5587.84 21.83 0.00 0.00 22735.91 6001.72 28283.96 00:20:59.493 0 00:20:59.493 21:26:14 -- target/tls.sh@263 -- # rpc_cmd save_config 00:20:59.493 21:26:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:59.493 21:26:14 -- common/autotest_common.sh@10 -- # set +x 00:20:59.752 21:26:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:59.752 21:26:14 -- target/tls.sh@263 -- # tgtcfg='{ 00:20:59.752 "subsystems": [ 00:20:59.752 { 00:20:59.752 "subsystem": "keyring", 00:20:59.752 "config": [ 00:20:59.752 { 00:20:59.753 "method": "keyring_file_add_key", 00:20:59.753 "params": { 00:20:59.753 "name": "key0", 00:20:59.753 "path": "/tmp/tmp.hXGadZvObd" 00:20:59.753 } 00:20:59.753 } 00:20:59.753 ] 00:20:59.753 }, 00:20:59.753 { 00:20:59.753 "subsystem": "iobuf", 00:20:59.753 "config": [ 00:20:59.753 { 00:20:59.753 "method": "iobuf_set_options", 00:20:59.753 "params": { 00:20:59.753 "small_pool_count": 8192, 00:20:59.753 "large_pool_count": 1024, 00:20:59.753 "small_bufsize": 8192, 00:20:59.753 "large_bufsize": 135168 00:20:59.753 } 00:20:59.753 } 00:20:59.753 ] 00:20:59.753 }, 00:20:59.753 { 00:20:59.753 "subsystem": "sock", 00:20:59.753 "config": [ 00:20:59.753 { 00:20:59.753 "method": "sock_impl_set_options", 00:20:59.753 "params": { 00:20:59.753 "impl_name": "posix", 00:20:59.753 "recv_buf_size": 2097152, 00:20:59.753 "send_buf_size": 2097152, 00:20:59.753 "enable_recv_pipe": true, 00:20:59.753 "enable_quickack": false, 00:20:59.753 "enable_placement_id": 0, 00:20:59.753 
"enable_zerocopy_send_server": true, 00:20:59.753 "enable_zerocopy_send_client": false, 00:20:59.753 "zerocopy_threshold": 0, 00:20:59.753 "tls_version": 0, 00:20:59.753 "enable_ktls": false 00:20:59.753 } 00:20:59.753 }, 00:20:59.753 { 00:20:59.753 "method": "sock_impl_set_options", 00:20:59.753 "params": { 00:20:59.753 "impl_name": "ssl", 00:20:59.753 "recv_buf_size": 4096, 00:20:59.753 "send_buf_size": 4096, 00:20:59.753 "enable_recv_pipe": true, 00:20:59.753 "enable_quickack": false, 00:20:59.753 "enable_placement_id": 0, 00:20:59.753 "enable_zerocopy_send_server": true, 00:20:59.753 "enable_zerocopy_send_client": false, 00:20:59.753 "zerocopy_threshold": 0, 00:20:59.753 "tls_version": 0, 00:20:59.753 "enable_ktls": false 00:20:59.753 } 00:20:59.753 } 00:20:59.753 ] 00:20:59.753 }, 00:20:59.753 { 00:20:59.753 "subsystem": "vmd", 00:20:59.753 "config": [] 00:20:59.753 }, 00:20:59.753 { 00:20:59.753 "subsystem": "accel", 00:20:59.753 "config": [ 00:20:59.753 { 00:20:59.753 "method": "accel_set_options", 00:20:59.753 "params": { 00:20:59.753 "small_cache_size": 128, 00:20:59.753 "large_cache_size": 16, 00:20:59.753 "task_count": 2048, 00:20:59.753 "sequence_count": 2048, 00:20:59.753 "buf_count": 2048 00:20:59.753 } 00:20:59.753 } 00:20:59.753 ] 00:20:59.753 }, 00:20:59.753 { 00:20:59.753 "subsystem": "bdev", 00:20:59.753 "config": [ 00:20:59.753 { 00:20:59.753 "method": "bdev_set_options", 00:20:59.753 "params": { 00:20:59.753 "bdev_io_pool_size": 65535, 00:20:59.753 "bdev_io_cache_size": 256, 00:20:59.753 "bdev_auto_examine": true, 00:20:59.753 "iobuf_small_cache_size": 128, 00:20:59.753 "iobuf_large_cache_size": 16 00:20:59.753 } 00:20:59.753 }, 00:20:59.753 { 00:20:59.753 "method": "bdev_raid_set_options", 00:20:59.753 "params": { 00:20:59.753 "process_window_size_kb": 1024 00:20:59.753 } 00:20:59.753 }, 00:20:59.753 { 00:20:59.753 "method": "bdev_iscsi_set_options", 00:20:59.753 "params": { 00:20:59.753 "timeout_sec": 30 00:20:59.753 } 00:20:59.753 }, 00:20:59.753 { 00:20:59.753 "method": "bdev_nvme_set_options", 00:20:59.753 "params": { 00:20:59.753 "action_on_timeout": "none", 00:20:59.753 "timeout_us": 0, 00:20:59.753 "timeout_admin_us": 0, 00:20:59.753 "keep_alive_timeout_ms": 10000, 00:20:59.753 "arbitration_burst": 0, 00:20:59.753 "low_priority_weight": 0, 00:20:59.753 "medium_priority_weight": 0, 00:20:59.753 "high_priority_weight": 0, 00:20:59.753 "nvme_adminq_poll_period_us": 10000, 00:20:59.753 "nvme_ioq_poll_period_us": 0, 00:20:59.753 "io_queue_requests": 0, 00:20:59.753 "delay_cmd_submit": true, 00:20:59.753 "transport_retry_count": 4, 00:20:59.753 "bdev_retry_count": 3, 00:20:59.753 "transport_ack_timeout": 0, 00:20:59.753 "ctrlr_loss_timeout_sec": 0, 00:20:59.753 "reconnect_delay_sec": 0, 00:20:59.753 "fast_io_fail_timeout_sec": 0, 00:20:59.753 "disable_auto_failback": false, 00:20:59.753 "generate_uuids": false, 00:20:59.753 "transport_tos": 0, 00:20:59.753 "nvme_error_stat": false, 00:20:59.753 "rdma_srq_size": 0, 00:20:59.753 "io_path_stat": false, 00:20:59.753 "allow_accel_sequence": false, 00:20:59.753 "rdma_max_cq_size": 0, 00:20:59.753 "rdma_cm_event_timeout_ms": 0, 00:20:59.753 "dhchap_digests": [ 00:20:59.753 "sha256", 00:20:59.753 "sha384", 00:20:59.753 "sha512" 00:20:59.753 ], 00:20:59.753 "dhchap_dhgroups": [ 00:20:59.753 "null", 00:20:59.753 "ffdhe2048", 00:20:59.753 "ffdhe3072", 00:20:59.753 "ffdhe4096", 00:20:59.753 "ffdhe6144", 00:20:59.753 "ffdhe8192" 00:20:59.753 ] 00:20:59.753 } 00:20:59.753 }, 00:20:59.753 { 00:20:59.753 "method": 
"bdev_nvme_set_hotplug", 00:20:59.753 "params": { 00:20:59.753 "period_us": 100000, 00:20:59.753 "enable": false 00:20:59.753 } 00:20:59.753 }, 00:20:59.753 { 00:20:59.753 "method": "bdev_malloc_create", 00:20:59.753 "params": { 00:20:59.753 "name": "malloc0", 00:20:59.753 "num_blocks": 8192, 00:20:59.753 "block_size": 4096, 00:20:59.753 "physical_block_size": 4096, 00:20:59.753 "uuid": "bf905156-6c88-43d4-8b9f-de1d9f881886", 00:20:59.753 "optimal_io_boundary": 0 00:20:59.753 } 00:20:59.753 }, 00:20:59.753 { 00:20:59.753 "method": "bdev_wait_for_examine" 00:20:59.753 } 00:20:59.753 ] 00:20:59.753 }, 00:20:59.753 { 00:20:59.753 "subsystem": "nbd", 00:20:59.753 "config": [] 00:20:59.753 }, 00:20:59.753 { 00:20:59.753 "subsystem": "scheduler", 00:20:59.753 "config": [ 00:20:59.753 { 00:20:59.753 "method": "framework_set_scheduler", 00:20:59.753 "params": { 00:20:59.753 "name": "static" 00:20:59.753 } 00:20:59.753 } 00:20:59.753 ] 00:20:59.753 }, 00:20:59.753 { 00:20:59.753 "subsystem": "nvmf", 00:20:59.753 "config": [ 00:20:59.753 { 00:20:59.753 "method": "nvmf_set_config", 00:20:59.753 "params": { 00:20:59.753 "discovery_filter": "match_any", 00:20:59.753 "admin_cmd_passthru": { 00:20:59.753 "identify_ctrlr": false 00:20:59.753 } 00:20:59.753 } 00:20:59.753 }, 00:20:59.753 { 00:20:59.753 "method": "nvmf_set_max_subsystems", 00:20:59.753 "params": { 00:20:59.753 "max_subsystems": 1024 00:20:59.753 } 00:20:59.753 }, 00:20:59.753 { 00:20:59.753 "method": "nvmf_set_crdt", 00:20:59.753 "params": { 00:20:59.753 "crdt1": 0, 00:20:59.753 "crdt2": 0, 00:20:59.753 "crdt3": 0 00:20:59.753 } 00:20:59.753 }, 00:20:59.753 { 00:20:59.753 "method": "nvmf_create_transport", 00:20:59.753 "params": { 00:20:59.753 "trtype": "TCP", 00:20:59.753 "max_queue_depth": 128, 00:20:59.753 "max_io_qpairs_per_ctrlr": 127, 00:20:59.753 "in_capsule_data_size": 4096, 00:20:59.753 "max_io_size": 131072, 00:20:59.753 "io_unit_size": 131072, 00:20:59.753 "max_aq_depth": 128, 00:20:59.753 "num_shared_buffers": 511, 00:20:59.753 "buf_cache_size": 4294967295, 00:20:59.753 "dif_insert_or_strip": false, 00:20:59.753 "zcopy": false, 00:20:59.753 "c2h_success": false, 00:20:59.753 "sock_priority": 0, 00:20:59.753 "abort_timeout_sec": 1, 00:20:59.753 "ack_timeout": 0 00:20:59.753 } 00:20:59.753 }, 00:20:59.753 { 00:20:59.753 "method": "nvmf_create_subsystem", 00:20:59.753 "params": { 00:20:59.753 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:59.753 "allow_any_host": false, 00:20:59.753 "serial_number": "00000000000000000000", 00:20:59.753 "model_number": "SPDK bdev Controller", 00:20:59.753 "max_namespaces": 32, 00:20:59.753 "min_cntlid": 1, 00:20:59.753 "max_cntlid": 65519, 00:20:59.753 "ana_reporting": false 00:20:59.753 } 00:20:59.753 }, 00:20:59.753 { 00:20:59.753 "method": "nvmf_subsystem_add_host", 00:20:59.753 "params": { 00:20:59.753 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:59.753 "host": "nqn.2016-06.io.spdk:host1", 00:20:59.753 "psk": "key0" 00:20:59.753 } 00:20:59.753 }, 00:20:59.753 { 00:20:59.753 "method": "nvmf_subsystem_add_ns", 00:20:59.753 "params": { 00:20:59.753 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:59.754 "namespace": { 00:20:59.754 "nsid": 1, 00:20:59.754 "bdev_name": "malloc0", 00:20:59.754 "nguid": "BF9051566C8843D48B9FDE1D9F881886", 00:20:59.754 "uuid": "bf905156-6c88-43d4-8b9f-de1d9f881886", 00:20:59.754 "no_auto_visible": false 00:20:59.754 } 00:20:59.754 } 00:20:59.754 }, 00:20:59.754 { 00:20:59.754 "method": "nvmf_subsystem_add_listener", 00:20:59.754 "params": { 00:20:59.754 "nqn": 
"nqn.2016-06.io.spdk:cnode1", 00:20:59.754 "listen_address": { 00:20:59.754 "trtype": "TCP", 00:20:59.754 "adrfam": "IPv4", 00:20:59.754 "traddr": "10.0.0.2", 00:20:59.754 "trsvcid": "4420" 00:20:59.754 }, 00:20:59.754 "secure_channel": true 00:20:59.754 } 00:20:59.754 } 00:20:59.754 ] 00:20:59.754 } 00:20:59.754 ] 00:20:59.754 }' 00:20:59.754 21:26:14 -- target/tls.sh@264 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:21:00.013 21:26:14 -- target/tls.sh@264 -- # bperfcfg='{ 00:21:00.013 "subsystems": [ 00:21:00.013 { 00:21:00.013 "subsystem": "keyring", 00:21:00.013 "config": [ 00:21:00.013 { 00:21:00.013 "method": "keyring_file_add_key", 00:21:00.013 "params": { 00:21:00.013 "name": "key0", 00:21:00.013 "path": "/tmp/tmp.hXGadZvObd" 00:21:00.013 } 00:21:00.013 } 00:21:00.013 ] 00:21:00.013 }, 00:21:00.013 { 00:21:00.013 "subsystem": "iobuf", 00:21:00.013 "config": [ 00:21:00.013 { 00:21:00.013 "method": "iobuf_set_options", 00:21:00.013 "params": { 00:21:00.013 "small_pool_count": 8192, 00:21:00.013 "large_pool_count": 1024, 00:21:00.013 "small_bufsize": 8192, 00:21:00.013 "large_bufsize": 135168 00:21:00.013 } 00:21:00.013 } 00:21:00.013 ] 00:21:00.013 }, 00:21:00.013 { 00:21:00.013 "subsystem": "sock", 00:21:00.013 "config": [ 00:21:00.013 { 00:21:00.013 "method": "sock_impl_set_options", 00:21:00.013 "params": { 00:21:00.013 "impl_name": "posix", 00:21:00.013 "recv_buf_size": 2097152, 00:21:00.013 "send_buf_size": 2097152, 00:21:00.013 "enable_recv_pipe": true, 00:21:00.013 "enable_quickack": false, 00:21:00.013 "enable_placement_id": 0, 00:21:00.013 "enable_zerocopy_send_server": true, 00:21:00.013 "enable_zerocopy_send_client": false, 00:21:00.013 "zerocopy_threshold": 0, 00:21:00.013 "tls_version": 0, 00:21:00.013 "enable_ktls": false 00:21:00.014 } 00:21:00.014 }, 00:21:00.014 { 00:21:00.014 "method": "sock_impl_set_options", 00:21:00.014 "params": { 00:21:00.014 "impl_name": "ssl", 00:21:00.014 "recv_buf_size": 4096, 00:21:00.014 "send_buf_size": 4096, 00:21:00.014 "enable_recv_pipe": true, 00:21:00.014 "enable_quickack": false, 00:21:00.014 "enable_placement_id": 0, 00:21:00.014 "enable_zerocopy_send_server": true, 00:21:00.014 "enable_zerocopy_send_client": false, 00:21:00.014 "zerocopy_threshold": 0, 00:21:00.014 "tls_version": 0, 00:21:00.014 "enable_ktls": false 00:21:00.014 } 00:21:00.014 } 00:21:00.014 ] 00:21:00.014 }, 00:21:00.014 { 00:21:00.014 "subsystem": "vmd", 00:21:00.014 "config": [] 00:21:00.014 }, 00:21:00.014 { 00:21:00.014 "subsystem": "accel", 00:21:00.014 "config": [ 00:21:00.014 { 00:21:00.014 "method": "accel_set_options", 00:21:00.014 "params": { 00:21:00.014 "small_cache_size": 128, 00:21:00.014 "large_cache_size": 16, 00:21:00.014 "task_count": 2048, 00:21:00.014 "sequence_count": 2048, 00:21:00.014 "buf_count": 2048 00:21:00.014 } 00:21:00.014 } 00:21:00.014 ] 00:21:00.014 }, 00:21:00.014 { 00:21:00.014 "subsystem": "bdev", 00:21:00.014 "config": [ 00:21:00.014 { 00:21:00.014 "method": "bdev_set_options", 00:21:00.014 "params": { 00:21:00.014 "bdev_io_pool_size": 65535, 00:21:00.014 "bdev_io_cache_size": 256, 00:21:00.014 "bdev_auto_examine": true, 00:21:00.014 "iobuf_small_cache_size": 128, 00:21:00.014 "iobuf_large_cache_size": 16 00:21:00.014 } 00:21:00.014 }, 00:21:00.014 { 00:21:00.014 "method": "bdev_raid_set_options", 00:21:00.014 "params": { 00:21:00.014 "process_window_size_kb": 1024 00:21:00.014 } 00:21:00.014 }, 00:21:00.014 { 00:21:00.014 "method": "bdev_iscsi_set_options", 
00:21:00.014 "params": { 00:21:00.014 "timeout_sec": 30 00:21:00.014 } 00:21:00.014 }, 00:21:00.014 { 00:21:00.014 "method": "bdev_nvme_set_options", 00:21:00.014 "params": { 00:21:00.014 "action_on_timeout": "none", 00:21:00.014 "timeout_us": 0, 00:21:00.014 "timeout_admin_us": 0, 00:21:00.014 "keep_alive_timeout_ms": 10000, 00:21:00.014 "arbitration_burst": 0, 00:21:00.014 "low_priority_weight": 0, 00:21:00.014 "medium_priority_weight": 0, 00:21:00.014 "high_priority_weight": 0, 00:21:00.014 "nvme_adminq_poll_period_us": 10000, 00:21:00.014 "nvme_ioq_poll_period_us": 0, 00:21:00.014 "io_queue_requests": 512, 00:21:00.014 "delay_cmd_submit": true, 00:21:00.014 "transport_retry_count": 4, 00:21:00.014 "bdev_retry_count": 3, 00:21:00.014 "transport_ack_timeout": 0, 00:21:00.014 "ctrlr_loss_timeout_sec": 0, 00:21:00.014 "reconnect_delay_sec": 0, 00:21:00.014 "fast_io_fail_timeout_sec": 0, 00:21:00.014 "disable_auto_failback": false, 00:21:00.014 "generate_uuids": false, 00:21:00.014 "transport_tos": 0, 00:21:00.014 "nvme_error_stat": false, 00:21:00.014 "rdma_srq_size": 0, 00:21:00.014 "io_path_stat": false, 00:21:00.014 "allow_accel_sequence": false, 00:21:00.014 "rdma_max_cq_size": 0, 00:21:00.014 "rdma_cm_event_timeout_ms": 0, 00:21:00.014 "dhchap_digests": [ 00:21:00.014 "sha256", 00:21:00.014 "sha384", 00:21:00.014 "sha512" 00:21:00.014 ], 00:21:00.014 "dhchap_dhgroups": [ 00:21:00.014 "null", 00:21:00.014 "ffdhe2048", 00:21:00.014 "ffdhe3072", 00:21:00.014 "ffdhe4096", 00:21:00.014 "ffdhe6144", 00:21:00.014 "ffdhe8192" 00:21:00.014 ] 00:21:00.014 } 00:21:00.014 }, 00:21:00.014 { 00:21:00.014 "method": "bdev_nvme_attach_controller", 00:21:00.014 "params": { 00:21:00.014 "name": "nvme0", 00:21:00.014 "trtype": "TCP", 00:21:00.014 "adrfam": "IPv4", 00:21:00.014 "traddr": "10.0.0.2", 00:21:00.014 "trsvcid": "4420", 00:21:00.014 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:00.014 "prchk_reftag": false, 00:21:00.014 "prchk_guard": false, 00:21:00.014 "ctrlr_loss_timeout_sec": 0, 00:21:00.014 "reconnect_delay_sec": 0, 00:21:00.014 "fast_io_fail_timeout_sec": 0, 00:21:00.014 "psk": "key0", 00:21:00.014 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:00.014 "hdgst": false, 00:21:00.014 "ddgst": false 00:21:00.014 } 00:21:00.014 }, 00:21:00.014 { 00:21:00.014 "method": "bdev_nvme_set_hotplug", 00:21:00.014 "params": { 00:21:00.014 "period_us": 100000, 00:21:00.014 "enable": false 00:21:00.014 } 00:21:00.014 }, 00:21:00.014 { 00:21:00.014 "method": "bdev_enable_histogram", 00:21:00.014 "params": { 00:21:00.014 "name": "nvme0n1", 00:21:00.014 "enable": true 00:21:00.014 } 00:21:00.014 }, 00:21:00.014 { 00:21:00.014 "method": "bdev_wait_for_examine" 00:21:00.014 } 00:21:00.014 ] 00:21:00.014 }, 00:21:00.014 { 00:21:00.014 "subsystem": "nbd", 00:21:00.014 "config": [] 00:21:00.014 } 00:21:00.014 ] 00:21:00.014 }' 00:21:00.014 21:26:14 -- target/tls.sh@266 -- # killprocess 1261247 00:21:00.014 21:26:14 -- common/autotest_common.sh@936 -- # '[' -z 1261247 ']' 00:21:00.014 21:26:14 -- common/autotest_common.sh@940 -- # kill -0 1261247 00:21:00.014 21:26:14 -- common/autotest_common.sh@941 -- # uname 00:21:00.014 21:26:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:00.014 21:26:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1261247 00:21:00.014 21:26:14 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:21:00.014 21:26:14 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:21:00.014 21:26:14 -- common/autotest_common.sh@954 -- # echo 
'killing process with pid 1261247' 00:21:00.014 killing process with pid 1261247 00:21:00.014 21:26:14 -- common/autotest_common.sh@955 -- # kill 1261247 00:21:00.014 Received shutdown signal, test time was about 1.000000 seconds 00:21:00.014 00:21:00.014 Latency(us) 00:21:00.014 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:00.014 =================================================================================================================== 00:21:00.014 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:00.014 21:26:14 -- common/autotest_common.sh@960 -- # wait 1261247 00:21:00.271 21:26:15 -- target/tls.sh@267 -- # killprocess 1261207 00:21:00.271 21:26:15 -- common/autotest_common.sh@936 -- # '[' -z 1261207 ']' 00:21:00.271 21:26:15 -- common/autotest_common.sh@940 -- # kill -0 1261207 00:21:00.272 21:26:15 -- common/autotest_common.sh@941 -- # uname 00:21:00.272 21:26:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:00.272 21:26:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1261207 00:21:00.272 21:26:15 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:00.272 21:26:15 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:00.272 21:26:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1261207' 00:21:00.272 killing process with pid 1261207 00:21:00.272 21:26:15 -- common/autotest_common.sh@955 -- # kill 1261207 00:21:00.272 21:26:15 -- common/autotest_common.sh@960 -- # wait 1261207 00:21:00.839 21:26:15 -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:21:00.839 21:26:15 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:00.839 21:26:15 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:00.839 21:26:15 -- common/autotest_common.sh@10 -- # set +x 00:21:00.839 21:26:15 -- target/tls.sh@269 -- # echo '{ 00:21:00.839 "subsystems": [ 00:21:00.839 { 00:21:00.839 "subsystem": "keyring", 00:21:00.839 "config": [ 00:21:00.839 { 00:21:00.839 "method": "keyring_file_add_key", 00:21:00.839 "params": { 00:21:00.839 "name": "key0", 00:21:00.839 "path": "/tmp/tmp.hXGadZvObd" 00:21:00.839 } 00:21:00.839 } 00:21:00.839 ] 00:21:00.839 }, 00:21:00.839 { 00:21:00.839 "subsystem": "iobuf", 00:21:00.839 "config": [ 00:21:00.839 { 00:21:00.839 "method": "iobuf_set_options", 00:21:00.839 "params": { 00:21:00.839 "small_pool_count": 8192, 00:21:00.839 "large_pool_count": 1024, 00:21:00.839 "small_bufsize": 8192, 00:21:00.839 "large_bufsize": 135168 00:21:00.839 } 00:21:00.839 } 00:21:00.839 ] 00:21:00.839 }, 00:21:00.839 { 00:21:00.839 "subsystem": "sock", 00:21:00.839 "config": [ 00:21:00.839 { 00:21:00.839 "method": "sock_impl_set_options", 00:21:00.839 "params": { 00:21:00.839 "impl_name": "posix", 00:21:00.839 "recv_buf_size": 2097152, 00:21:00.839 "send_buf_size": 2097152, 00:21:00.839 "enable_recv_pipe": true, 00:21:00.839 "enable_quickack": false, 00:21:00.839 "enable_placement_id": 0, 00:21:00.839 "enable_zerocopy_send_server": true, 00:21:00.839 "enable_zerocopy_send_client": false, 00:21:00.839 "zerocopy_threshold": 0, 00:21:00.839 "tls_version": 0, 00:21:00.839 "enable_ktls": false 00:21:00.839 } 00:21:00.839 }, 00:21:00.839 { 00:21:00.839 "method": "sock_impl_set_options", 00:21:00.839 "params": { 00:21:00.839 "impl_name": "ssl", 00:21:00.839 "recv_buf_size": 4096, 00:21:00.839 "send_buf_size": 4096, 00:21:00.839 "enable_recv_pipe": true, 00:21:00.839 "enable_quickack": false, 00:21:00.839 "enable_placement_id": 0, 00:21:00.839 
"enable_zerocopy_send_server": true, 00:21:00.839 "enable_zerocopy_send_client": false, 00:21:00.839 "zerocopy_threshold": 0, 00:21:00.839 "tls_version": 0, 00:21:00.839 "enable_ktls": false 00:21:00.839 } 00:21:00.839 } 00:21:00.839 ] 00:21:00.839 }, 00:21:00.839 { 00:21:00.839 "subsystem": "vmd", 00:21:00.839 "config": [] 00:21:00.839 }, 00:21:00.839 { 00:21:00.839 "subsystem": "accel", 00:21:00.839 "config": [ 00:21:00.839 { 00:21:00.839 "method": "accel_set_options", 00:21:00.839 "params": { 00:21:00.839 "small_cache_size": 128, 00:21:00.839 "large_cache_size": 16, 00:21:00.839 "task_count": 2048, 00:21:00.839 "sequence_count": 2048, 00:21:00.839 "buf_count": 2048 00:21:00.839 } 00:21:00.839 } 00:21:00.839 ] 00:21:00.839 }, 00:21:00.839 { 00:21:00.839 "subsystem": "bdev", 00:21:00.839 "config": [ 00:21:00.839 { 00:21:00.840 "method": "bdev_set_options", 00:21:00.840 "params": { 00:21:00.840 "bdev_io_pool_size": 65535, 00:21:00.840 "bdev_io_cache_size": 256, 00:21:00.840 "bdev_auto_examine": true, 00:21:00.840 "iobuf_small_cache_size": 128, 00:21:00.840 "iobuf_large_cache_size": 16 00:21:00.840 } 00:21:00.840 }, 00:21:00.840 { 00:21:00.840 "method": "bdev_raid_set_options", 00:21:00.840 "params": { 00:21:00.840 "process_window_size_kb": 1024 00:21:00.840 } 00:21:00.840 }, 00:21:00.840 { 00:21:00.840 "method": "bdev_iscsi_set_options", 00:21:00.840 "params": { 00:21:00.840 "timeout_sec": 30 00:21:00.840 } 00:21:00.840 }, 00:21:00.840 { 00:21:00.840 "method": "bdev_nvme_set_options", 00:21:00.840 "params": { 00:21:00.840 "action_on_timeout": "none", 00:21:00.840 "timeout_us": 0, 00:21:00.840 "timeout_admin_us": 0, 00:21:00.840 "keep_alive_timeout_ms": 10000, 00:21:00.840 "arbitration_burst": 0, 00:21:00.840 "low_priority_weight": 0, 00:21:00.840 "medium_priority_weight": 0, 00:21:00.840 "high_priority_weight": 0, 00:21:00.840 "nvme_adminq_poll_period_us": 10000, 00:21:00.840 "nvme_ioq_poll_period_us": 0, 00:21:00.840 "io_queue_requests": 0, 00:21:00.840 "delay_cmd_submit": true, 00:21:00.840 "transport_retry_count": 4, 00:21:00.840 "bdev_retry_count": 3, 00:21:00.840 "transport_ack_timeout": 0, 00:21:00.840 "ctrlr_loss_timeout_sec": 0, 00:21:00.840 "reconnect_delay_sec": 0, 00:21:00.840 "fast_io_fail_timeout_sec": 0, 00:21:00.840 "disable_auto_failback": false, 00:21:00.840 "generate_uuids": false, 00:21:00.840 "transport_tos": 0, 00:21:00.840 "nvme_error_stat": false, 00:21:00.840 "rdma_srq_size": 0, 00:21:00.840 "io_path_stat": false, 00:21:00.840 "allow_accel_sequence": false, 00:21:00.840 "rdma_max_cq_size": 0, 00:21:00.840 "rdma_cm_event_timeout_ms": 0, 00:21:00.840 "dhchap_digests": [ 00:21:00.840 "sha256", 00:21:00.840 "sha384", 00:21:00.840 "sha512" 00:21:00.840 ], 00:21:00.840 "dhchap_dhgroups": [ 00:21:00.840 "null", 00:21:00.840 "ffdhe2048", 00:21:00.840 "ffdhe3072", 00:21:00.840 "ffdhe4096", 00:21:00.840 "ffdhe6144", 00:21:00.840 "ffdhe8192" 00:21:00.840 ] 00:21:00.840 } 00:21:00.840 }, 00:21:00.840 { 00:21:00.840 "method": "bdev_nvme_set_hotplug", 00:21:00.840 "params": { 00:21:00.840 "period_us": 100000, 00:21:00.840 "enable": false 00:21:00.840 } 00:21:00.840 }, 00:21:00.840 { 00:21:00.840 "method": "bdev_malloc_create", 00:21:00.840 "params": { 00:21:00.840 "name": "malloc0", 00:21:00.840 "num_blocks": 8192, 00:21:00.840 "block_size": 4096, 00:21:00.840 "physical_block_size": 4096, 00:21:00.840 "uuid": "bf905156-6c88-43d4-8b9f-de1d9f881886", 00:21:00.840 "optimal_io_boundary": 0 00:21:00.840 } 00:21:00.840 }, 00:21:00.840 { 00:21:00.840 "method": "bdev_wait_for_examine" 
00:21:00.840 }
00:21:00.840 ]
00:21:00.840 },
00:21:00.840 {
00:21:00.840 "subsystem": "nbd",
00:21:00.840 "config": []
00:21:00.840 },
00:21:00.840 {
00:21:00.840 "subsystem": "scheduler",
00:21:00.840 "config": [
00:21:00.840 {
00:21:00.840 "method": "framework_set_scheduler",
00:21:00.840 "params": {
00:21:00.840 "name": "static"
00:21:00.840 }
00:21:00.840 }
00:21:00.840 ]
00:21:00.840 },
00:21:00.840 {
00:21:00.840 "subsystem": "nvmf",
00:21:00.840 "config": [
00:21:00.840 {
00:21:00.840 "method": "nvmf_set_config",
00:21:00.840 "params": {
00:21:00.840 "discovery_filter": "match_any",
00:21:00.840 "admin_cmd_passthru": {
00:21:00.840 "identify_ctrlr": false
00:21:00.840 }
00:21:00.840 }
00:21:00.840 },
00:21:00.840 {
00:21:00.840 "method": "nvmf_set_max_subsystems",
00:21:00.840 "params": {
00:21:00.840 "max_subsystems": 1024
00:21:00.840 }
00:21:00.840 },
00:21:00.840 {
00:21:00.840 "method": "nvmf_set_crdt",
00:21:00.840 "params": {
00:21:00.840 "crdt1": 0,
00:21:00.840 "crdt2": 0,
00:21:00.840 "crdt3": 0
00:21:00.840 }
00:21:00.840 },
00:21:00.840 {
00:21:00.840 "method": "nvmf_create_transport",
00:21:00.840 "params": {
00:21:00.840 "trtype": "TCP",
00:21:00.840 "max_queue_depth": 128,
00:21:00.840 "max_io_qpairs_per_ctrlr": 127,
00:21:00.840 "in_capsule_data_size": 4096,
00:21:00.840 "max_io_size": 131072,
00:21:00.840 "io_unit_size": 131072,
00:21:00.840 "max_aq_depth": 128,
00:21:00.840 "num_shared_buffers": 511,
00:21:00.840 "buf_cache_size": 4294967295,
00:21:00.840 "dif_insert_or_strip": false,
00:21:00.840 "zcopy": false,
00:21:00.840 "c2h_success": false,
00:21:00.840 "sock_priority": 0,
00:21:00.840 "abort_timeout_sec": 1,
00:21:00.840 "ack_timeout": 0
00:21:00.840 }
00:21:00.840 },
00:21:00.840 {
00:21:00.840 "method": "nvmf_create_subsystem",
00:21:00.840 "params": {
00:21:00.840 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:21:00.840 "allow_any_host": false,
00:21:00.840 "serial_number": "00000000000000000000",
00:21:00.840 "model_number": "SPDK bdev Controller",
00:21:00.840 "max_namespaces": 32,
00:21:00.840 "min_cntlid": 1,
00:21:00.840 "max_cntlid": 65519,
00:21:00.840 "ana_reporting": false
00:21:00.840 }
00:21:00.840 },
00:21:00.840 {
00:21:00.840 "method": "nvmf_subsystem_add_host",
00:21:00.840 "params": {
00:21:00.840 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:21:00.840 "host": "nqn.2016-06.io.spdk:host1",
00:21:00.840 "psk": "key0"
00:21:00.840 }
00:21:00.840 },
00:21:00.840 {
00:21:00.840 "method": "nvmf_subsystem_add_ns",
00:21:00.840 "params": {
00:21:00.840 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:21:00.840 "namespace": {
00:21:00.840 "nsid": 1,
00:21:00.840 "bdev_name": "malloc0",
00:21:00.840 "nguid": "BF9051566C8843D48B9FDE1D9F881886",
00:21:00.840 "uuid": "bf905156-6c88-43d4-8b9f-de1d9f881886",
00:21:00.840 "no_auto_visible": false
00:21:00.840 }
00:21:00.840 }
00:21:00.840 },
00:21:00.840 {
00:21:00.840 "method": "nvmf_subsystem_add_listener",
00:21:00.840 "params": {
00:21:00.840 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:21:00.840 "listen_address": {
00:21:00.840 "trtype": "TCP",
00:21:00.840 "adrfam": "IPv4",
00:21:00.840 "traddr": "10.0.0.2",
00:21:00.840 "trsvcid": "4420"
00:21:00.840 },
00:21:00.840 "secure_channel": true
00:21:00.840 }
00:21:00.840 }
00:21:00.840 ]
00:21:00.840 }
00:21:00.840 ]
00:21:00.840 }'
00:21:00.840 21:26:15 -- nvmf/common.sh@470 -- # nvmfpid=1261963
00:21:00.840 21:26:15 -- nvmf/common.sh@471 -- # waitforlisten 1261963
00:21:00.840 21:26:15 -- common/autotest_common.sh@817 -- # '[' -z 1261963 ']'
00:21:00.840 21:26:15 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62
00:21:00.840 21:26:15 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:21:00.840 21:26:15 -- common/autotest_common.sh@822 -- # local max_retries=100
00:21:00.840 21:26:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:21:00.840 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:21:00.840 21:26:15 -- common/autotest_common.sh@826 -- # xtrace_disable
00:21:00.840 21:26:15 -- common/autotest_common.sh@10 -- # set +x
00:21:00.840 [2024-04-24 21:26:15.779675] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization...
00:21:00.840 [2024-04-24 21:26:15.779786] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:21:01.101 EAL: No free 2048 kB hugepages reported on node 1
00:21:01.101 [2024-04-24 21:26:15.906214] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:01.101 [2024-04-24 21:26:15.998121] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:21:01.101 [2024-04-24 21:26:15.998159] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:21:01.101 [2024-04-24 21:26:15.998169] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:21:01.101 [2024-04-24 21:26:15.998178] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running.
00:21:01.101 [2024-04-24 21:26:15.998185] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:21:01.101 [2024-04-24 21:26:15.998275] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:21:01.361 [2024-04-24 21:26:16.284205] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:21:01.361 [2024-04-24 21:26:16.316186] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental
00:21:01.361 [2024-04-24 21:26:16.316414] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:21:01.620 21:26:16 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:21:01.620 21:26:16 -- common/autotest_common.sh@850 -- # return 0
00:21:01.620 21:26:16 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt
00:21:01.620 21:26:16 -- common/autotest_common.sh@716 -- # xtrace_disable
00:21:01.620 21:26:16 -- common/autotest_common.sh@10 -- # set +x
00:21:01.620 21:26:16 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:21:01.620 21:26:16 -- target/tls.sh@272 -- # bdevperf_pid=1262169
00:21:01.620 21:26:16 -- target/tls.sh@273 -- # waitforlisten 1262169 /var/tmp/bdevperf.sock
00:21:01.620 21:26:16 -- common/autotest_common.sh@817 -- # '[' -z 1262169 ']'
00:21:01.620 21:26:16 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:21:01.620 21:26:16 -- common/autotest_common.sh@822 -- # local max_retries=100
00:21:01.620 21:26:16 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
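Note: the target side of this run is driven entirely by the JSON blob echoed above; nvmfappstart pipes it into nvmf_tgt as -c /dev/fd/62, so the keyring PSK, TCP transport, subsystem, host entry and secure_channel listener all come up in one shot with no follow-up RPCs. A minimal sketch of the same pattern outside the harness (the gen_conf helper, key path and config contents here are illustrative, not taken from this run):

    # generate a config on the fly and hand it to nvmf_tgt via process
    # substitution; the child sees it as a /dev/fd/NN path, as in the log
    gen_conf() {
      cat <<'EOF'
    {
      "subsystems": [
        { "subsystem": "keyring",
          "config": [ { "method": "keyring_file_add_key",
                        "params": { "name": "key0", "path": "/tmp/psk.txt" } } ] }
      ]
    }
    EOF
    }
    build/bin/nvmf_tgt -i 0 -e 0xFFFF -c <(gen_conf) &
    nvmfpid=$!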
00:21:01.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:21:01.620 21:26:16 -- common/autotest_common.sh@826 -- # xtrace_disable
00:21:01.620 21:26:16 -- common/autotest_common.sh@10 -- # set +x
00:21:01.620 21:26:16 -- target/tls.sh@270 -- # echo '{
00:21:01.620 "subsystems": [
00:21:01.620 {
00:21:01.620 "subsystem": "keyring",
00:21:01.620 "config": [
00:21:01.620 {
00:21:01.620 "method": "keyring_file_add_key",
00:21:01.620 "params": {
00:21:01.620 "name": "key0",
00:21:01.620 "path": "/tmp/tmp.hXGadZvObd"
00:21:01.620 }
00:21:01.620 }
00:21:01.620 ]
00:21:01.620 },
00:21:01.620 {
00:21:01.620 "subsystem": "iobuf",
00:21:01.620 "config": [
00:21:01.620 {
00:21:01.620 "method": "iobuf_set_options",
00:21:01.620 "params": {
00:21:01.620 "small_pool_count": 8192,
00:21:01.620 "large_pool_count": 1024,
00:21:01.620 "small_bufsize": 8192,
00:21:01.620 "large_bufsize": 135168
00:21:01.620 }
00:21:01.620 }
00:21:01.620 ]
00:21:01.620 },
00:21:01.620 {
00:21:01.620 "subsystem": "sock",
00:21:01.620 "config": [
00:21:01.620 {
00:21:01.620 "method": "sock_impl_set_options",
00:21:01.620 "params": {
00:21:01.620 "impl_name": "posix",
00:21:01.620 "recv_buf_size": 2097152,
00:21:01.620 "send_buf_size": 2097152,
00:21:01.620 "enable_recv_pipe": true,
00:21:01.620 "enable_quickack": false,
00:21:01.620 "enable_placement_id": 0,
00:21:01.620 "enable_zerocopy_send_server": true,
00:21:01.620 "enable_zerocopy_send_client": false,
00:21:01.620 "zerocopy_threshold": 0,
00:21:01.620 "tls_version": 0,
00:21:01.620 "enable_ktls": false
00:21:01.620 }
00:21:01.620 },
00:21:01.620 {
00:21:01.620 "method": "sock_impl_set_options",
00:21:01.620 "params": {
00:21:01.620 "impl_name": "ssl",
00:21:01.620 "recv_buf_size": 4096,
00:21:01.620 "send_buf_size": 4096,
00:21:01.620 "enable_recv_pipe": true,
00:21:01.620 "enable_quickack": false,
00:21:01.620 "enable_placement_id": 0,
00:21:01.620 "enable_zerocopy_send_server": true,
00:21:01.620 "enable_zerocopy_send_client": false,
00:21:01.620 "zerocopy_threshold": 0,
00:21:01.620 "tls_version": 0,
00:21:01.620 "enable_ktls": false
00:21:01.620 }
00:21:01.620 }
00:21:01.620 ]
00:21:01.620 },
00:21:01.620 {
00:21:01.620 "subsystem": "vmd",
00:21:01.620 "config": []
00:21:01.620 },
00:21:01.620 {
00:21:01.620 "subsystem": "accel",
00:21:01.620 "config": [
00:21:01.620 {
00:21:01.620 "method": "accel_set_options",
00:21:01.620 "params": {
00:21:01.620 "small_cache_size": 128,
00:21:01.620 "large_cache_size": 16,
00:21:01.620 "task_count": 2048,
00:21:01.620 "sequence_count": 2048,
00:21:01.620 "buf_count": 2048
00:21:01.620 }
00:21:01.620 }
00:21:01.620 ]
00:21:01.620 },
00:21:01.620 {
00:21:01.620 "subsystem": "bdev",
00:21:01.620 "config": [
00:21:01.620 {
00:21:01.620 "method": "bdev_set_options",
00:21:01.620 "params": {
00:21:01.620 "bdev_io_pool_size": 65535,
00:21:01.620 "bdev_io_cache_size": 256,
00:21:01.620 "bdev_auto_examine": true,
00:21:01.620 "iobuf_small_cache_size": 128,
00:21:01.620 "iobuf_large_cache_size": 16
00:21:01.620 }
00:21:01.620 },
00:21:01.620 {
00:21:01.620 "method": "bdev_raid_set_options",
00:21:01.620 "params": {
00:21:01.620 "process_window_size_kb": 1024
00:21:01.620 }
00:21:01.620 },
00:21:01.620 {
00:21:01.620 "method": "bdev_iscsi_set_options",
00:21:01.620 "params": {
00:21:01.620 "timeout_sec": 30
00:21:01.620 }
00:21:01.620 },
00:21:01.620 {
00:21:01.620 "method": "bdev_nvme_set_options",
00:21:01.620 "params": {
00:21:01.620 "action_on_timeout": "none",
00:21:01.620 "timeout_us": 0,
00:21:01.620 "timeout_admin_us": 0,
00:21:01.620 "keep_alive_timeout_ms": 10000,
00:21:01.620 "arbitration_burst": 0,
00:21:01.620 "low_priority_weight": 0,
00:21:01.620 "medium_priority_weight": 0,
00:21:01.620 "high_priority_weight": 0,
00:21:01.620 "nvme_adminq_poll_period_us": 10000,
00:21:01.620 "nvme_ioq_poll_period_us": 0,
00:21:01.620 "io_queue_requests": 512,
00:21:01.620 "delay_cmd_submit": true,
00:21:01.620 "transport_retry_count": 4,
00:21:01.620 "bdev_retry_count": 3,
00:21:01.620 "transport_ack_timeout": 0,
00:21:01.620 "ctrlr_loss_timeout_sec": 0,
00:21:01.620 "reconnect_delay_sec": 0,
00:21:01.620 "fast_io_fail_timeout_sec": 0,
00:21:01.620 "disable_auto_failback": false,
00:21:01.620 "generate_uuids": false,
00:21:01.620 "transport_tos": 0,
00:21:01.620 "nvme_error_stat": false,
00:21:01.620 "rdma_srq_size": 0,
00:21:01.620 "io_path_stat": false,
00:21:01.620 "allow_accel_sequence": false,
00:21:01.620 "rdma_max_cq_size": 0,
00:21:01.620 "rdma_cm_event_timeout_ms": 0,
00:21:01.620 "dhchap_digests": [
00:21:01.620 "sha256",
00:21:01.620 "sha384",
00:21:01.620 "sha512"
00:21:01.620 ],
00:21:01.620 "dhchap_dhgroups": [
00:21:01.620 "null",
00:21:01.620 "ffdhe2048",
00:21:01.620 "ffdhe3072",
00:21:01.620 "ffdhe4096",
00:21:01.620 "ffdhe6144",
00:21:01.620 "ffdhe8192"
00:21:01.620 ]
00:21:01.620 }
00:21:01.620 },
00:21:01.620 {
00:21:01.620 "method": "bdev_nvme_attach_controller",
00:21:01.620 "params": {
00:21:01.620 "name": "nvme0",
00:21:01.620 "trtype": "TCP",
00:21:01.620 "adrfam": "IPv4",
00:21:01.620 "traddr": "10.0.0.2",
00:21:01.620 "trsvcid": "4420",
00:21:01.620 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:21:01.620 "prchk_reftag": false,
00:21:01.620 "prchk_guard": false,
00:21:01.620 "ctrlr_loss_timeout_sec": 0,
00:21:01.620 "reconnect_delay_sec": 0,
00:21:01.620 "fast_io_fail_timeout_sec": 0,
00:21:01.620 "psk": "key0",
00:21:01.620 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:21:01.620 "hdgst": false,
00:21:01.620 "ddgst": false
00:21:01.620 }
00:21:01.620 },
00:21:01.620 {
00:21:01.620 "method": "bdev_nvme_set_hotplug",
00:21:01.621 "params": {
00:21:01.621 "period_us": 100000,
00:21:01.621 "enable": false
00:21:01.621 }
00:21:01.621 },
00:21:01.621 {
00:21:01.621 "method": "bdev_enable_histogram",
00:21:01.621 "params": {
00:21:01.621 "name": "nvme0n1",
00:21:01.621 "enable": true
00:21:01.621 }
00:21:01.621 },
00:21:01.621 {
00:21:01.621 "method": "bdev_wait_for_examine"
00:21:01.621 }
00:21:01.621 ]
00:21:01.621 },
00:21:01.621 {
00:21:01.621 "subsystem": "nbd",
00:21:01.621 "config": []
00:21:01.621 }
00:21:01.621 ]
00:21:01.621 }'
00:21:01.621 21:26:16 -- target/tls.sh@270 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63
00:21:01.879 [2024-04-24 21:26:16.586312] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization...
00:21:01.879 [2024-04-24 21:26:16.586454] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1262169 ]
00:21:01.879 EAL: No free 2048 kB hugepages reported on node 1
00:21:01.879 [2024-04-24 21:26:16.707720] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:01.879 [2024-04-24 21:26:16.802225] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:21:02.136 [2024-04-24 21:26:17.015747] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:21:02.394 21:26:17 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:21:02.394 21:26:17 -- common/autotest_common.sh@850 -- # return 0
00:21:02.394 21:26:17 -- target/tls.sh@275 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:21:02.394 21:26:17 -- target/tls.sh@275 -- # jq -r '.[].name'
00:21:02.654 21:26:17 -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:02.654 21:26:17 -- target/tls.sh@276 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:21:02.654 Running I/O for 1 seconds...
00:21:03.595
00:21:03.595 Latency(us)
00:21:03.595 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:03.595 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:21:03.595 Verification LBA range: start 0x0 length 0x2000
00:21:03.595 nvme0n1 : 1.01 5118.77 20.00 0.00 0.00 24820.03 4863.46 44978.39
00:21:03.595 ===================================================================================================================
00:21:03.595 Total : 5118.77 20.00 0.00 0.00 24820.03 4863.46 44978.39
00:21:03.595 0
00:21:03.595 21:26:18 -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT
00:21:03.595 21:26:18 -- target/tls.sh@279 -- # cleanup
00:21:03.595 21:26:18 -- target/tls.sh@15 -- # process_shm --id 0
00:21:03.595 21:26:18 -- common/autotest_common.sh@794 -- # type=--id
00:21:03.595 21:26:18 -- common/autotest_common.sh@795 -- # id=0
00:21:03.595 21:26:18 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']'
00:21:03.595 21:26:18 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n'
00:21:03.595 21:26:18 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0
00:21:03.595 21:26:18 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]]
00:21:03.595 21:26:18 -- common/autotest_common.sh@806 -- # for n in $shm_files
00:21:03.595 21:26:18 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0
00:21:03.853 nvmf_trace.0
00:21:03.853 21:26:18 -- common/autotest_common.sh@809 -- # return 0
00:21:03.853 21:26:18 -- target/tls.sh@16 -- # killprocess 1262169
00:21:03.853 21:26:18 -- common/autotest_common.sh@936 -- # '[' -z 1262169 ']'
00:21:03.853 21:26:18 -- common/autotest_common.sh@940 -- # kill -0 1262169
00:21:03.853 21:26:18 -- common/autotest_common.sh@941 -- # uname
00:21:03.853 21:26:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:21:03.853 21:26:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1262169
00:21:03.853 21:26:18 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:21:03.853 21:26:18 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:21:03.853 21:26:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1262169'
00:21:03.853 killing process with pid 1262169
00:21:03.853 21:26:18 -- common/autotest_common.sh@955 -- # kill 1262169
00:21:03.853 Received shutdown signal, test time was about 1.000000 seconds
00:21:03.853
00:21:03.853 Latency(us)
00:21:03.853 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:03.853 ===================================================================================================================
00:21:03.853 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:21:03.853 21:26:18 -- common/autotest_common.sh@960 -- # wait 1262169
00:21:04.112 21:26:19 -- target/tls.sh@17 -- # nvmftestfini
00:21:04.112 21:26:19 -- nvmf/common.sh@477 -- # nvmfcleanup
00:21:04.112 21:26:19 -- nvmf/common.sh@117 -- # sync
00:21:04.112 21:26:19 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:21:04.112 21:26:19 -- nvmf/common.sh@120 -- # set +e
00:21:04.112 21:26:19 -- nvmf/common.sh@121 -- # for i in {1..20}
00:21:04.112 21:26:19 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:21:04.112 rmmod nvme_tcp
00:21:04.112 rmmod nvme_fabrics
00:21:04.112 rmmod nvme_keyring
00:21:04.372 21:26:19 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:21:04.372 21:26:19 -- nvmf/common.sh@124 -- # set -e
00:21:04.372 21:26:19 -- nvmf/common.sh@125 -- # return 0
00:21:04.372 21:26:19 -- nvmf/common.sh@478 -- # '[' -n 1261963 ']'
00:21:04.372 21:26:19 -- nvmf/common.sh@479 -- # killprocess 1261963
00:21:04.372 21:26:19 -- common/autotest_common.sh@936 -- # '[' -z 1261963 ']'
00:21:04.372 21:26:19 -- common/autotest_common.sh@940 -- # kill -0 1261963
00:21:04.372 21:26:19 -- common/autotest_common.sh@941 -- # uname
00:21:04.372 21:26:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:21:04.372 21:26:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1261963
00:21:04.372 21:26:19 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:21:04.372 21:26:19 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:21:04.372 21:26:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1261963'
00:21:04.372 killing process with pid 1261963
00:21:04.372 21:26:19 -- common/autotest_common.sh@955 -- # kill 1261963
00:21:04.372 21:26:19 -- common/autotest_common.sh@960 -- # wait 1261963
00:21:04.632 21:26:19 -- nvmf/common.sh@481 -- # '[' '' == iso ']'
00:21:04.632 21:26:19 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]]
00:21:04.632 21:26:19 -- nvmf/common.sh@485 -- # nvmf_tcp_fini
00:21:04.632 21:26:19 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:21:04.632 21:26:19 -- nvmf/common.sh@278 -- # remove_spdk_ns
00:21:04.632 21:26:19 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:21:04.632 21:26:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:21:04.632 21:26:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:21:07.172 21:26:21 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:21:07.172 21:26:21 -- target/tls.sh@18 -- # rm -f /tmp/tmp.fxgZIEfVzF /tmp/tmp.Hwn7JSFT0J /tmp/tmp.hXGadZvObd
00:21:07.172
00:21:07.172 real 1m26.429s
00:21:07.172 user 2m16.554s
00:21:07.172 sys 0m22.567s
00:21:07.172 21:26:21 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:21:07.172 21:26:21 -- common/autotest_common.sh@10 -- # set +x
00:21:07.172 ************************************
00:21:07.172 END TEST nvmf_tls
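Note: both shutdown tables with all-zero counters above are printed by bdevperf's signal handler on SIGTERM, not a sign of failed I/O; the real results are the earlier nvme0n1 row. The kill-and-collect idiom the trace keeps repeating boils down to the following (simplified sketch; the real killprocess also checks the process name and refuses to kill sudo, and $output_dir stands in for the harness's output path):

    killprocess() {
      local pid=$1
      kill -0 "$pid" 2>/dev/null || return 0  # nothing to do if already gone
      kill "$pid"                             # graceful SIGTERM; app prints its summary
      wait "$pid"                             # reap it and propagate the exit status
    }
    # archive the SPDK trace ring left in shared memory for offline debug
    tar -C /dev/shm/ -czf "$output_dir/nvmf_trace.0_shm.tar.gz" nvmf_trace.0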
************************************
00:21:07.172 21:26:21 -- nvmf/nvmf.sh@61 -- # run_test nvmf_fips /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp
00:21:07.172 21:26:21 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:21:07.172 21:26:21 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:21:07.172 21:26:21 -- common/autotest_common.sh@10 -- # set +x
00:21:07.172 ************************************
00:21:07.172 START TEST nvmf_fips
00:21:07.172 ************************************
00:21:07.172 21:26:21 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp
00:21:07.172 * Looking for test storage...
00:21:07.172 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/fips
00:21:07.172 21:26:21 -- fips/fips.sh@11 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh
00:21:07.172 21:26:21 -- nvmf/common.sh@7 -- # uname -s
00:21:07.172 21:26:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:21:07.172 21:26:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:21:07.172 21:26:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:21:07.172 21:26:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:21:07.172 21:26:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:21:07.172 21:26:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:21:07.172 21:26:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:21:07.172 21:26:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:21:07.172 21:26:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:21:07.172 21:26:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:21:07.172 21:26:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b7babf-2e5c-ee11-906e-a4bf01970bf2
00:21:07.172 21:26:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=80b7babf-2e5c-ee11-906e-a4bf01970bf2
00:21:07.172 21:26:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:21:07.172 21:26:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:21:07.172 21:26:21 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:21:07.172 21:26:21 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:21:07.172 21:26:21 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh
00:21:07.172 21:26:21 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]]
00:21:07.172 21:26:21 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:21:07.172 21:26:21 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:21:07.172 21:26:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:07.172 21:26:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:07.172 21:26:21 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:07.172 21:26:21 -- paths/export.sh@5 -- # export PATH
00:21:07.172 21:26:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:07.172 21:26:21 -- nvmf/common.sh@47 -- # : 0
00:21:07.172 21:26:21 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:21:07.172 21:26:21 -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:21:07.172 21:26:21 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:21:07.172 21:26:21 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:21:07.172 21:26:21 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:21:07.172 21:26:21 -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:21:07.172 21:26:21 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:21:07.172 21:26:21 -- nvmf/common.sh@51 -- # have_pci_nics=0
00:21:07.172 21:26:21 -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py
00:21:07.172 21:26:21 -- fips/fips.sh@89 -- # check_openssl_version
00:21:07.172 21:26:21 -- fips/fips.sh@83 -- # local target=3.0.0
00:21:07.172 21:26:21 -- fips/fips.sh@85 -- # openssl version
00:21:07.172 21:26:21 -- fips/fips.sh@85 -- # awk '{print $2}'
00:21:07.172 21:26:21 -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0
00:21:07.172 21:26:21 -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0
00:21:07.172 21:26:21 -- scripts/common.sh@330 -- # local ver1 ver1_l
00:21:07.172 21:26:21 -- scripts/common.sh@331 -- # local ver2 ver2_l
00:21:07.172 21:26:21 -- scripts/common.sh@333 -- # IFS=.-:
00:21:07.172 21:26:21 -- scripts/common.sh@333 -- # read -ra ver1
00:21:07.172 21:26:21 -- scripts/common.sh@334 -- # IFS=.-:
00:21:07.172 21:26:21 -- scripts/common.sh@334 -- # read -ra ver2
00:21:07.172 21:26:21 -- scripts/common.sh@335 -- # local 'op=>='
00:21:07.172 21:26:21 -- scripts/common.sh@337 -- # ver1_l=3
00:21:07.172 21:26:21 -- scripts/common.sh@338 -- # ver2_l=3
00:21:07.172 21:26:21 -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v
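Note: the ge 3.0.9 3.0.0 call above (checking `openssl version` against the 3.0.0 minimum) expands into the field-by-field cmp_versions walk traced below. Condensed, the comparison amounts to this simplified sketch of scripts/common.sh's logic (the real helper also validates each field as a decimal):

    ge() {  # true when version $1 >= version $2
      local -a ver1 ver2
      IFS=.- read -ra ver1 <<< "$1"
      IFS=.- read -ra ver2 <<< "$2"
      local v
      for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 0   # strictly newer
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 1   # strictly older
      done
      return 0   # every field equal
    }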
00:21:07.172 21:26:21 -- scripts/common.sh@341 -- # case "$op" in
00:21:07.172 21:26:21 -- scripts/common.sh@345 -- # : 1
00:21:07.172 21:26:21 -- scripts/common.sh@361 -- # (( v = 0 ))
00:21:07.172 21:26:21 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:21:07.172 21:26:21 -- scripts/common.sh@362 -- # decimal 3
00:21:07.172 21:26:21 -- scripts/common.sh@350 -- # local d=3
00:21:07.172 21:26:21 -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]]
00:21:07.172 21:26:21 -- scripts/common.sh@352 -- # echo 3
00:21:07.172 21:26:21 -- scripts/common.sh@362 -- # ver1[v]=3
00:21:07.172 21:26:21 -- scripts/common.sh@363 -- # decimal 3
00:21:07.172 21:26:21 -- scripts/common.sh@350 -- # local d=3
00:21:07.172 21:26:21 -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]]
00:21:07.172 21:26:21 -- scripts/common.sh@352 -- # echo 3
00:21:07.172 21:26:21 -- scripts/common.sh@363 -- # ver2[v]=3
00:21:07.172 21:26:21 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] ))
00:21:07.172 21:26:21 -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] ))
00:21:07.172 21:26:21 -- scripts/common.sh@361 -- # (( v++ ))
00:21:07.172 21:26:21 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:21:07.172 21:26:21 -- scripts/common.sh@362 -- # decimal 0
00:21:07.172 21:26:21 -- scripts/common.sh@350 -- # local d=0
00:21:07.172 21:26:21 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]]
00:21:07.172 21:26:21 -- scripts/common.sh@352 -- # echo 0
00:21:07.172 21:26:21 -- scripts/common.sh@362 -- # ver1[v]=0
00:21:07.172 21:26:21 -- scripts/common.sh@363 -- # decimal 0
00:21:07.172 21:26:21 -- scripts/common.sh@350 -- # local d=0
00:21:07.172 21:26:21 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]]
00:21:07.172 21:26:21 -- scripts/common.sh@352 -- # echo 0
00:21:07.172 21:26:21 -- scripts/common.sh@363 -- # ver2[v]=0
00:21:07.172 21:26:21 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] ))
00:21:07.172 21:26:21 -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] ))
00:21:07.172 21:26:21 -- scripts/common.sh@361 -- # (( v++ ))
00:21:07.172 21:26:21 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:21:07.172 21:26:21 -- scripts/common.sh@362 -- # decimal 9
00:21:07.172 21:26:21 -- scripts/common.sh@350 -- # local d=9
00:21:07.172 21:26:21 -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]]
00:21:07.172 21:26:21 -- scripts/common.sh@352 -- # echo 9
00:21:07.172 21:26:21 -- scripts/common.sh@362 -- # ver1[v]=9
00:21:07.172 21:26:21 -- scripts/common.sh@363 -- # decimal 0
00:21:07.172 21:26:21 -- scripts/common.sh@350 -- # local d=0
00:21:07.173 21:26:21 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]]
00:21:07.173 21:26:21 -- scripts/common.sh@352 -- # echo 0
00:21:07.173 21:26:21 -- scripts/common.sh@363 -- # ver2[v]=0
00:21:07.173 21:26:21 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] ))
00:21:07.173 21:26:21 -- scripts/common.sh@364 -- # return 0
00:21:07.173 21:26:21 -- fips/fips.sh@95 -- # openssl info -modulesdir
00:21:07.173 21:26:21 -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]]
00:21:07.173 21:26:21 -- fips/fips.sh@100 -- # openssl fipsinstall -help
00:21:07.173 21:26:21 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode'
00:21:07.173 21:26:21 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]]
00:21:07.173 21:26:21 -- fips/fips.sh@104 -- # export callback=build_openssl_config
00:21:07.173 21:26:21 -- fips/fips.sh@104 -- # callback=build_openssl_config
00:21:07.173 21:26:21 -- fips/fips.sh@113 -- # build_openssl_config
00:21:07.173 21:26:21 -- fips/fips.sh@37 -- # cat
00:21:07.173 21:26:21 -- fips/fips.sh@57 -- # [[ ! -t 0 ]]
00:21:07.173 21:26:21 -- fips/fips.sh@58 -- # cat -
00:21:07.173 21:26:21 -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf
00:21:07.173 21:26:21 -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf
00:21:07.173 21:26:21 -- fips/fips.sh@116 -- # mapfile -t providers
00:21:07.173 21:26:21 -- fips/fips.sh@116 -- # openssl list -providers
00:21:07.173 21:26:21 -- fips/fips.sh@116 -- # grep name
00:21:07.173 21:26:22 -- fips/fips.sh@120 -- # (( 2 != 2 ))
00:21:07.173 21:26:22 -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]]
00:21:07.173 21:26:22 -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]]
00:21:07.173 21:26:22 -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62
00:21:07.173 21:26:22 -- common/autotest_common.sh@638 -- # local es=0
00:21:07.173 21:26:22 -- common/autotest_common.sh@640 -- # valid_exec_arg openssl md5 /dev/fd/62
00:21:07.173 21:26:22 -- common/autotest_common.sh@626 -- # local arg=openssl
00:21:07.173 21:26:22 -- fips/fips.sh@127 -- # :
00:21:07.173 21:26:22 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in
00:21:07.173 21:26:22 -- common/autotest_common.sh@630 -- # type -t openssl
00:21:07.173 21:26:22 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in
00:21:07.173 21:26:22 -- common/autotest_common.sh@632 -- # type -P openssl
00:21:07.173 21:26:22 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in
00:21:07.173 21:26:22 -- common/autotest_common.sh@632 -- # arg=/usr/bin/openssl
00:21:07.173 21:26:22 -- common/autotest_common.sh@632 -- # [[ -x /usr/bin/openssl ]]
00:21:07.173 21:26:22 -- common/autotest_common.sh@641 -- # openssl md5 /dev/fd/62
00:21:07.173 Error setting digest
00:21:07.173 0072186D8B7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties ()
00:21:07.173 0072186D8B7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254:
00:21:07.173 21:26:22 -- common/autotest_common.sh@641 -- # es=1
00:21:07.173 21:26:22 -- common/autotest_common.sh@649 -- # (( es > 128 ))
00:21:07.173 21:26:22 -- common/autotest_common.sh@660 -- # [[ -n '' ]]
00:21:07.173 21:26:22 -- common/autotest_common.sh@665 -- # (( !es == 0 ))
00:21:07.173 21:26:22 -- fips/fips.sh@130 -- # nvmftestinit
00:21:07.173 21:26:22 -- nvmf/common.sh@430 -- # '[' -z tcp ']'
00:21:07.173 21:26:22 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:21:07.173 21:26:22 -- nvmf/common.sh@437 -- # prepare_net_devs
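Note: the failing openssl md5 above is the point of the check, not a defect. MD5 is not a FIPS-approved digest, so with the Red Hat FIPS provider enforcing, digest initialization must fail, and the NOT wrapper inverts that failure into a pass. As a standalone probe the idea is (simplified sketch; the harness feeds its input over /dev/fd/62 rather than a pipe):

    # in an enforcing FIPS build this pipeline must fail
    if echo test | openssl md5 >/dev/null 2>&1; then
      echo 'MD5 still works - FIPS mode is not enforced' >&2
      exit 1
    fi
    echo 'MD5 rejected as expected - FIPS provider is enforcing'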
00:21:07.173 21:26:22 -- nvmf/common.sh@399 -- # local -g is_hw=no
00:21:07.173 21:26:22 -- nvmf/common.sh@401 -- # remove_spdk_ns
00:21:07.173 21:26:22 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:21:07.173 21:26:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:21:07.173 21:26:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:21:07.173 21:26:22 -- nvmf/common.sh@403 -- # [[ phy-fallback != virt ]]
00:21:07.173 21:26:22 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs
00:21:07.173 21:26:22 -- nvmf/common.sh@285 -- # xtrace_disable
00:21:07.173 21:26:22 -- common/autotest_common.sh@10 -- # set +x
00:21:13.755 21:26:28 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci
00:21:13.755 21:26:28 -- nvmf/common.sh@291 -- # pci_devs=()
00:21:13.755 21:26:28 -- nvmf/common.sh@291 -- # local -a pci_devs
00:21:13.755 21:26:28 -- nvmf/common.sh@292 -- # pci_net_devs=()
00:21:13.755 21:26:28 -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:21:13.755 21:26:28 -- nvmf/common.sh@293 -- # pci_drivers=()
00:21:13.755 21:26:28 -- nvmf/common.sh@293 -- # local -A pci_drivers
00:21:13.755 21:26:28 -- nvmf/common.sh@295 -- # net_devs=()
00:21:13.755 21:26:28 -- nvmf/common.sh@295 -- # local -ga net_devs
00:21:13.755 21:26:28 -- nvmf/common.sh@296 -- # e810=()
00:21:13.755 21:26:28 -- nvmf/common.sh@296 -- # local -ga e810
00:21:13.755 21:26:28 -- nvmf/common.sh@297 -- # x722=()
00:21:13.755 21:26:28 -- nvmf/common.sh@297 -- # local -ga x722
00:21:13.755 21:26:28 -- nvmf/common.sh@298 -- # mlx=()
00:21:13.755 21:26:28 -- nvmf/common.sh@298 -- # local -ga mlx
00:21:13.755 21:26:28 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:21:13.755 21:26:28 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:21:13.755 21:26:28 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:21:13.755 21:26:28 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:21:13.755 21:26:28 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:21:13.755 21:26:28 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:21:13.755 21:26:28 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:21:13.755 21:26:28 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:21:13.755 21:26:28 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:21:13.755 21:26:28 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:21:13.755 21:26:28 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:21:13.755 21:26:28 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:21:13.755 21:26:28 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:21:13.755 21:26:28 -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]]
00:21:13.755 21:26:28 -- nvmf/common.sh@329 -- # [[ '' == e810 ]]
00:21:13.755 21:26:28 -- nvmf/common.sh@331 -- # [[ '' == x722 ]]
00:21:13.755 21:26:28 -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:21:13.755 21:26:28 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:21:13.755 21:26:28 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)'
00:21:13.755 Found 0000:27:00.0 (0x8086 - 0x159b)
00:21:13.755 21:26:28 -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:21:13.755 21:26:28 -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:21:13.755 21:26:28 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:21:13.755 21:26:28 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:21:13.755 21:26:28 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:21:13.755 21:26:28 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:21:13.755 21:26:28 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)'
00:21:13.755 Found 0000:27:00.1 (0x8086 - 0x159b)
00:21:13.755 21:26:28 -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:21:13.755 21:26:28 -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:21:13.755 21:26:28 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:21:13.755 21:26:28 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:21:13.755 21:26:28 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:21:13.755 21:26:28 -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:21:13.755 21:26:28 -- nvmf/common.sh@372 -- # [[ '' == e810 ]]
00:21:13.755 21:26:28 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:21:13.755 21:26:28 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:21:13.755 21:26:28 -- nvmf/common.sh@384 -- # (( 1 == 0 ))
00:21:13.755 21:26:28 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:21:13.755 21:26:28 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0'
00:21:13.755 Found net devices under 0000:27:00.0: cvl_0_0
00:21:13.755 21:26:28 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}")
00:21:13.755 21:26:28 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:21:13.755 21:26:28 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:21:13.755 21:26:28 -- nvmf/common.sh@384 -- # (( 1 == 0 ))
00:21:13.755 21:26:28 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:21:13.755 21:26:28 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1'
00:21:13.755 Found net devices under 0000:27:00.1: cvl_0_1
00:21:13.755 21:26:28 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}")
00:21:13.755 21:26:28 -- nvmf/common.sh@393 -- # (( 2 == 0 ))
00:21:13.755 21:26:28 -- nvmf/common.sh@403 -- # is_hw=yes
00:21:13.755 21:26:28 -- nvmf/common.sh@405 -- # [[ yes == yes ]]
00:21:13.755 21:26:28 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]]
00:21:13.755 21:26:28 -- nvmf/common.sh@407 -- # nvmf_tcp_init
00:21:13.755 21:26:28 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:21:13.755 21:26:28 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:21:13.755 21:26:28 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:21:13.755 21:26:28 -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:21:13.755 21:26:28 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:21:13.755 21:26:28 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:21:13.755 21:26:28 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:21:13.755 21:26:28 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:21:13.755 21:26:28 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:21:13.755 21:26:28 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:21:13.755 21:26:28 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:21:13.755 21:26:28 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:21:13.755 21:26:28 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:21:13.755 21:26:28 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:21:13.755 21:26:28 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:21:13.755 21:26:28 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:21:13.755 21:26:28 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:21:13.755 21:26:28 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:21:13.755 21:26:28 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:21:13.755 21:26:28 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:21:13.755 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:21:13.755 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.228 ms
00:21:13.755
00:21:13.755 --- 10.0.0.2 ping statistics ---
00:21:13.756 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:21:13.756 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms
00:21:13.756 21:26:28 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:21:13.756 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:21:13.756 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms
00:21:13.756
00:21:13.756 --- 10.0.0.1 ping statistics ---
00:21:13.756 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:21:13.756 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms
00:21:13.756 21:26:28 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:21:13.756 21:26:28 -- nvmf/common.sh@411 -- # return 0
00:21:13.756 21:26:28 -- nvmf/common.sh@439 -- # '[' '' == iso ']'
00:21:13.756 21:26:28 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:21:13.756 21:26:28 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]]
00:21:13.756 21:26:28 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]]
00:21:13.756 21:26:28 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:21:13.756 21:26:28 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']'
00:21:13.756 21:26:28 -- nvmf/common.sh@463 -- # modprobe nvme-tcp
00:21:13.756 21:26:28 -- fips/fips.sh@131 -- # nvmfappstart -m 0x2
00:21:13.756 21:26:28 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt
00:21:13.756 21:26:28 -- common/autotest_common.sh@710 -- # xtrace_disable
00:21:13.756 21:26:28 -- common/autotest_common.sh@10 -- # set +x
00:21:13.756 21:26:28 -- nvmf/common.sh@470 -- # nvmfpid=1266995
00:21:13.756 21:26:28 -- nvmf/common.sh@471 -- # waitforlisten 1266995
00:21:13.756 21:26:28 -- common/autotest_common.sh@817 -- # '[' -z 1266995 ']'
00:21:13.756 21:26:28 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:21:13.756 21:26:28 -- common/autotest_common.sh@822 -- # local max_retries=100
00:21:13.756 21:26:28 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:21:13.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:21:13.756 21:26:28 -- common/autotest_common.sh@826 -- # xtrace_disable
00:21:13.756 21:26:28 -- common/autotest_common.sh@10 -- # set +x
00:21:13.756 21:26:28 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:21:14.017 [2024-04-24 21:26:28.764109] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization...
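Note: both NIC ports live in the same host, so nvmf_tcp_init fakes a two-node topology by pushing the target port into its own network namespace; the two pings prove each side can reach the other before any NVMe traffic flows. Condensed from the trace above (same interface names and addresses as this run):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target port, 10.0.0.2
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator port stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator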
00:21:14.017 [2024-04-24 21:26:28.764254] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:21:14.017 EAL: No free 2048 kB hugepages reported on node 1
00:21:14.017 [2024-04-24 21:26:28.902758] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:14.278 [2024-04-24 21:26:28.995686] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:21:14.278 [2024-04-24 21:26:28.995735] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:21:14.278 [2024-04-24 21:26:28.995745] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:21:14.278 [2024-04-24 21:26:28.995755] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running.
00:21:14.278 [2024-04-24 21:26:28.995762] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:21:14.278 [2024-04-24 21:26:28.995800] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:21:14.539 21:26:29 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:21:14.539 21:26:29 -- common/autotest_common.sh@850 -- # return 0
00:21:14.539 21:26:29 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt
00:21:14.539 21:26:29 -- common/autotest_common.sh@716 -- # xtrace_disable
00:21:14.539 21:26:29 -- common/autotest_common.sh@10 -- # set +x
00:21:14.539 21:26:29 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:21:14.539 21:26:29 -- fips/fips.sh@133 -- # trap cleanup EXIT
00:21:14.539 21:26:29 -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:
00:21:14.539 21:26:29 -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/fips/key.txt
00:21:14.539 21:26:29 -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:
00:21:14.539 21:26:29 -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/fips/key.txt
00:21:14.539 21:26:29 -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/fips/key.txt
00:21:14.539 21:26:29 -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/fips/key.txt
00:21:14.539 21:26:29 -- fips/fips.sh@24 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py
00:21:14.803 [2024-04-24 21:26:29.620632] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:21:14.803 [2024-04-24 21:26:29.636575] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental
00:21:14.803 [2024-04-24 21:26:29.636818] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:21:14.803 [2024-04-24 21:26:29.683815] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09
00:21:14.803 malloc0
00:21:14.803 21:26:29 -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:21:14.803 21:26:29 -- fips/fips.sh@147 -- # bdevperf_pid=1267301
00:21:14.803 21:26:29 -- fips/fips.sh@148 -- # waitforlisten 1267301 /var/tmp/bdevperf.sock
00:21:14.803 21:26:29 -- common/autotest_common.sh@817 -- # '[' -z 1267301 ']'
00:21:14.803 21:26:29 -- fips/fips.sh@145 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
00:21:14.803 21:26:29 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:21:14.803 21:26:29 -- common/autotest_common.sh@822 -- # local max_retries=100
00:21:14.803 21:26:29 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:21:14.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:21:14.803 21:26:29 -- common/autotest_common.sh@826 -- # xtrace_disable
00:21:14.803 21:26:29 -- common/autotest_common.sh@10 -- # set +x
00:21:15.064 [2024-04-24 21:26:29.823974] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization...
00:21:15.064 [2024-04-24 21:26:29.824249] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1267301 ]
00:21:15.064 EAL: No free 2048 kB hugepages reported on node 1
00:21:15.064 [2024-04-24 21:26:29.955866] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:15.325 [2024-04-24 21:26:30.063718] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:21:15.586 21:26:30 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:21:15.586 21:26:30 -- common/autotest_common.sh@850 -- # return 0
00:21:15.586 21:26:30 -- fips/fips.sh@150 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/fips/key.txt
00:21:15.847 [2024-04-24 21:26:30.638409] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:21:15.847 [2024-04-24 21:26:30.638545] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09
00:21:15.847 TLSTESTn1
00:21:15.847 21:26:30 -- fips/fips.sh@154 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:21:15.847 Running I/O for 10 seconds...
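Note: unlike the tls.sh run, which loaded everything through config blobs, fips.sh wires the TLS initiator up with live RPCs: the interleaved PSK is written to key.txt, restricted to mode 0600, registered on the target, and then handed to bdev_nvme_attach_controller on the bdevperf side. Reconstructed from the calls above (key value, socket and NQNs are the ones used in this run):

    echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > key.txt
    chmod 0600 key.txt    # the harness restricts the key file before use
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk key.txt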
00:21:28.072 00:21:28.072 Latency(us) 00:21:28.072 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:28.072 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:28.072 Verification LBA range: start 0x0 length 0x2000 00:21:28.072 TLSTESTn1 : 10.01 6064.54 23.69 0.00 0.00 21074.40 6381.14 58499.50 00:21:28.072 =================================================================================================================== 00:21:28.072 Total : 6064.54 23.69 0.00 0.00 21074.40 6381.14 58499.50 00:21:28.072 0 00:21:28.072 21:26:40 -- fips/fips.sh@1 -- # cleanup 00:21:28.072 21:26:40 -- fips/fips.sh@15 -- # process_shm --id 0 00:21:28.072 21:26:40 -- common/autotest_common.sh@794 -- # type=--id 00:21:28.072 21:26:40 -- common/autotest_common.sh@795 -- # id=0 00:21:28.072 21:26:40 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:21:28.072 21:26:40 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:28.072 21:26:40 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:21:28.072 21:26:40 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:21:28.072 21:26:40 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:21:28.072 21:26:40 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:28.072 nvmf_trace.0 00:21:28.072 21:26:40 -- common/autotest_common.sh@809 -- # return 0 00:21:28.072 21:26:40 -- fips/fips.sh@16 -- # killprocess 1267301 00:21:28.072 21:26:40 -- common/autotest_common.sh@936 -- # '[' -z 1267301 ']' 00:21:28.072 21:26:40 -- common/autotest_common.sh@940 -- # kill -0 1267301 00:21:28.072 21:26:40 -- common/autotest_common.sh@941 -- # uname 00:21:28.072 21:26:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:28.072 21:26:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1267301 00:21:28.072 21:26:40 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:21:28.072 21:26:40 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:21:28.072 21:26:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1267301' 00:21:28.072 killing process with pid 1267301 00:21:28.072 21:26:40 -- common/autotest_common.sh@955 -- # kill 1267301 00:21:28.072 Received shutdown signal, test time was about 10.000000 seconds 00:21:28.072 00:21:28.072 Latency(us) 00:21:28.072 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:28.072 =================================================================================================================== 00:21:28.072 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:28.072 [2024-04-24 21:26:40.934031] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:28.073 21:26:40 -- common/autotest_common.sh@960 -- # wait 1267301 00:21:28.073 21:26:41 -- fips/fips.sh@17 -- # nvmftestfini 00:21:28.073 21:26:41 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:28.073 21:26:41 -- nvmf/common.sh@117 -- # sync 00:21:28.073 21:26:41 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:28.073 21:26:41 -- nvmf/common.sh@120 -- # set +e 00:21:28.073 21:26:41 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:28.073 21:26:41 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:28.073 rmmod nvme_tcp 00:21:28.073 rmmod nvme_fabrics 00:21:28.073 rmmod nvme_keyring 00:21:28.073 
21:26:41 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:28.073 21:26:41 -- nvmf/common.sh@124 -- # set -e 00:21:28.073 21:26:41 -- nvmf/common.sh@125 -- # return 0 00:21:28.073 21:26:41 -- nvmf/common.sh@478 -- # '[' -n 1266995 ']' 00:21:28.073 21:26:41 -- nvmf/common.sh@479 -- # killprocess 1266995 00:21:28.073 21:26:41 -- common/autotest_common.sh@936 -- # '[' -z 1266995 ']' 00:21:28.073 21:26:41 -- common/autotest_common.sh@940 -- # kill -0 1266995 00:21:28.073 21:26:41 -- common/autotest_common.sh@941 -- # uname 00:21:28.073 21:26:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:28.073 21:26:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1266995 00:21:28.073 21:26:41 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:21:28.073 21:26:41 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:21:28.073 21:26:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1266995' 00:21:28.073 killing process with pid 1266995 00:21:28.073 21:26:41 -- common/autotest_common.sh@955 -- # kill 1266995 00:21:28.073 [2024-04-24 21:26:41.441434] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:28.073 21:26:41 -- common/autotest_common.sh@960 -- # wait 1266995 00:21:28.073 21:26:41 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:28.073 21:26:41 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:21:28.073 21:26:41 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:21:28.073 21:26:41 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:28.073 21:26:41 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:28.073 21:26:41 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:28.073 21:26:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:28.073 21:26:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:29.583 21:26:44 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:29.583 21:26:44 -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:29.583 00:21:29.583 real 0m22.248s 00:21:29.583 user 0m24.898s 00:21:29.583 sys 0m8.037s 00:21:29.583 21:26:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:29.583 21:26:44 -- common/autotest_common.sh@10 -- # set +x 00:21:29.583 ************************************ 00:21:29.583 END TEST nvmf_fips 00:21:29.583 ************************************ 00:21:29.583 21:26:44 -- nvmf/nvmf.sh@64 -- # '[' 0 -eq 1 ']' 00:21:29.583 21:26:44 -- nvmf/nvmf.sh@70 -- # [[ phy-fallback == phy ]] 00:21:29.583 21:26:44 -- nvmf/nvmf.sh@84 -- # timing_exit target 00:21:29.583 21:26:44 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:29.583 21:26:44 -- common/autotest_common.sh@10 -- # set +x 00:21:29.583 21:26:44 -- nvmf/nvmf.sh@86 -- # timing_enter host 00:21:29.583 21:26:44 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:29.583 21:26:44 -- common/autotest_common.sh@10 -- # set +x 00:21:29.583 21:26:44 -- nvmf/nvmf.sh@88 -- # [[ 0 -eq 0 ]] 00:21:29.583 21:26:44 -- nvmf/nvmf.sh@89 -- # run_test nvmf_multicontroller /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:21:29.583 21:26:44 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:29.583 21:26:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:29.583 21:26:44 -- common/autotest_common.sh@10 -- # set +x 00:21:29.583 
************************************ 00:21:29.583 START TEST nvmf_multicontroller 00:21:29.583 ************************************ 00:21:29.583 21:26:44 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:21:29.583 * Looking for test storage... 00:21:29.583 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host 00:21:29.583 21:26:44 -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:21:29.583 21:26:44 -- nvmf/common.sh@7 -- # uname -s 00:21:29.583 21:26:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:29.583 21:26:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:29.583 21:26:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:29.583 21:26:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:29.583 21:26:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:29.583 21:26:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:29.583 21:26:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:29.583 21:26:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:29.583 21:26:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:29.583 21:26:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:29.583 21:26:44 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b7babf-2e5c-ee11-906e-a4bf01970bf2 00:21:29.583 21:26:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=80b7babf-2e5c-ee11-906e-a4bf01970bf2 00:21:29.583 21:26:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:29.583 21:26:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:29.583 21:26:44 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:21:29.583 21:26:44 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:29.583 21:26:44 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:21:29.583 21:26:44 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:29.583 21:26:44 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:29.583 21:26:44 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:29.583 21:26:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:29.583 21:26:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:29.583 21:26:44 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:29.583 21:26:44 -- paths/export.sh@5 -- # export PATH 00:21:29.583 21:26:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:29.583 21:26:44 -- nvmf/common.sh@47 -- # : 0 00:21:29.583 21:26:44 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:29.583 21:26:44 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:29.583 21:26:44 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:29.583 21:26:44 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:29.583 21:26:44 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:29.583 21:26:44 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:29.583 21:26:44 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:29.583 21:26:44 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:29.583 21:26:44 -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:29.583 21:26:44 -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:29.583 21:26:44 -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:21:29.583 21:26:44 -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:21:29.583 21:26:44 -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:29.583 21:26:44 -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:21:29.583 21:26:44 -- host/multicontroller.sh@23 -- # nvmftestinit 00:21:29.583 21:26:44 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:29.583 21:26:44 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:29.583 21:26:44 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:29.583 21:26:44 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:29.583 21:26:44 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:29.583 21:26:44 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:29.583 21:26:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:29.583 21:26:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:29.583 21:26:44 -- nvmf/common.sh@403 -- # [[ phy-fallback != virt ]] 00:21:29.583 21:26:44 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:21:29.583 21:26:44 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:29.583 21:26:44 -- common/autotest_common.sh@10 -- # set +x 00:21:34.857 21:26:49 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:34.857 21:26:49 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:34.857 21:26:49 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:34.857 21:26:49 -- nvmf/common.sh@292 -- # pci_net_devs=() 
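The NIC discovery that follows (the "Found 0000:27:00.0 ..." and "Found net devices under ..." lines) is a plain sysfs walk; condensed, the lookup the script performs is roughly:

  # Map each candidate PCI NIC to its kernel net device names via sysfs,
  # the same idiom nvmf/common.sh uses with its pci_net_devs array.
  for pci in 0000:27:00.0 0000:27:00.1; do
      vendor=$(cat "/sys/bus/pci/devices/$pci/vendor")
      device=$(cat "/sys/bus/pci/devices/$pci/device")
      echo "Found $pci ($vendor - $device)"
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
      echo "Found net devices under $pci: ${pci_net_devs[@]##*/}"
  done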
00:21:34.857 21:26:49 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:34.857 21:26:49 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:34.857 21:26:49 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:34.857 21:26:49 -- nvmf/common.sh@295 -- # net_devs=() 00:21:34.857 21:26:49 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:34.857 21:26:49 -- nvmf/common.sh@296 -- # e810=() 00:21:34.857 21:26:49 -- nvmf/common.sh@296 -- # local -ga e810 00:21:34.857 21:26:49 -- nvmf/common.sh@297 -- # x722=() 00:21:34.857 21:26:49 -- nvmf/common.sh@297 -- # local -ga x722 00:21:34.857 21:26:49 -- nvmf/common.sh@298 -- # mlx=() 00:21:34.857 21:26:49 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:34.857 21:26:49 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:34.857 21:26:49 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:34.857 21:26:49 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:34.857 21:26:49 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:34.857 21:26:49 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:34.857 21:26:49 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:34.857 21:26:49 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:34.857 21:26:49 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:34.857 21:26:49 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:34.857 21:26:49 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:34.857 21:26:49 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:34.857 21:26:49 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:34.857 21:26:49 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:34.857 21:26:49 -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:21:34.857 21:26:49 -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:21:34.857 21:26:49 -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:21:34.857 21:26:49 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:34.857 21:26:49 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:34.857 21:26:49 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:21:34.857 Found 0000:27:00.0 (0x8086 - 0x159b) 00:21:34.857 21:26:49 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:34.857 21:26:49 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:34.857 21:26:49 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:34.857 21:26:49 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:34.857 21:26:49 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:34.857 21:26:49 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:34.857 21:26:49 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:21:34.857 Found 0000:27:00.1 (0x8086 - 0x159b) 00:21:34.857 21:26:49 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:34.857 21:26:49 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:34.857 21:26:49 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:34.857 21:26:49 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:34.857 21:26:49 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:34.857 21:26:49 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:34.857 21:26:49 -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:21:34.857 21:26:49 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:34.857 21:26:49 -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:34.857 21:26:49 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:34.857 21:26:49 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:34.857 21:26:49 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:21:34.857 Found net devices under 0000:27:00.0: cvl_0_0 00:21:34.857 21:26:49 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:34.857 21:26:49 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:34.857 21:26:49 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:34.857 21:26:49 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:34.857 21:26:49 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:34.857 21:26:49 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:21:34.858 Found net devices under 0000:27:00.1: cvl_0_1 00:21:34.858 21:26:49 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:34.858 21:26:49 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:21:34.858 21:26:49 -- nvmf/common.sh@403 -- # is_hw=yes 00:21:34.858 21:26:49 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:21:34.858 21:26:49 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:21:34.858 21:26:49 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:21:34.858 21:26:49 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:34.858 21:26:49 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:34.858 21:26:49 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:34.858 21:26:49 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:34.858 21:26:49 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:34.858 21:26:49 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:34.858 21:26:49 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:34.858 21:26:49 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:34.858 21:26:49 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:34.858 21:26:49 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:34.858 21:26:49 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:34.858 21:26:49 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:34.858 21:26:49 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:34.858 21:26:49 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:34.858 21:26:49 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:34.858 21:26:49 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:34.858 21:26:49 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:34.858 21:26:49 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:34.858 21:26:49 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:34.858 21:26:49 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:34.858 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:34.858 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.166 ms 00:21:34.858 00:21:34.858 --- 10.0.0.2 ping statistics --- 00:21:34.858 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:34.858 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:21:34.858 21:26:49 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:34.858 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:34.858 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.074 ms 00:21:34.858 00:21:34.858 --- 10.0.0.1 ping statistics --- 00:21:34.858 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:34.858 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:21:34.858 21:26:49 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:34.858 21:26:49 -- nvmf/common.sh@411 -- # return 0 00:21:34.858 21:26:49 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:34.858 21:26:49 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:34.858 21:26:49 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:21:34.858 21:26:49 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:34.858 21:26:49 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:34.858 21:26:49 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:34.858 21:26:49 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:34.858 21:26:49 -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:21:34.858 21:26:49 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:34.858 21:26:49 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:34.858 21:26:49 -- common/autotest_common.sh@10 -- # set +x 00:21:34.858 21:26:49 -- nvmf/common.sh@470 -- # nvmfpid=1273487 00:21:34.858 21:26:49 -- nvmf/common.sh@471 -- # waitforlisten 1273487 00:21:34.858 21:26:49 -- common/autotest_common.sh@817 -- # '[' -z 1273487 ']' 00:21:34.858 21:26:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:34.858 21:26:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:34.858 21:26:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:34.858 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:34.858 21:26:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:34.858 21:26:49 -- common/autotest_common.sh@10 -- # set +x 00:21:34.858 21:26:49 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:34.858 [2024-04-24 21:26:49.723221] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization... 00:21:34.858 [2024-04-24 21:26:49.723335] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:34.858 EAL: No free 2048 kB hugepages reported on node 1 00:21:35.117 [2024-04-24 21:26:49.842845] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:35.117 [2024-04-24 21:26:49.939136] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:35.117 [2024-04-24 21:26:49.939171] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:35.117 [2024-04-24 21:26:49.939181] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:35.117 [2024-04-24 21:26:49.939190] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:35.117 [2024-04-24 21:26:49.939197] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
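The nvmfappstart step above comes down to launching the target inside the namespace the harness prepared and polling for its RPC socket; a condensed equivalent (repo-relative paths are an assumption):

  # Start nvmf_tgt in the test namespace and wait for the default RPC socket
  # so that later rpc.py calls will succeed.
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!
  until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done
  echo "nvmf_tgt is up with pid $nvmfpid"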
00:21:35.117 [2024-04-24 21:26:49.939349] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:35.117 [2024-04-24 21:26:49.939382] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:35.117 [2024-04-24 21:26:49.939392] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:35.686 21:26:50 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:35.686 21:26:50 -- common/autotest_common.sh@850 -- # return 0 00:21:35.686 21:26:50 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:35.686 21:26:50 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:35.686 21:26:50 -- common/autotest_common.sh@10 -- # set +x 00:21:35.686 21:26:50 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:35.686 21:26:50 -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:35.686 21:26:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:35.686 21:26:50 -- common/autotest_common.sh@10 -- # set +x 00:21:35.686 [2024-04-24 21:26:50.451701] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:35.686 21:26:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:35.686 21:26:50 -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:35.686 21:26:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:35.686 21:26:50 -- common/autotest_common.sh@10 -- # set +x 00:21:35.686 Malloc0 00:21:35.686 21:26:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:35.686 21:26:50 -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:35.686 21:26:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:35.686 21:26:50 -- common/autotest_common.sh@10 -- # set +x 00:21:35.686 21:26:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:35.686 21:26:50 -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:35.686 21:26:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:35.686 21:26:50 -- common/autotest_common.sh@10 -- # set +x 00:21:35.686 21:26:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:35.686 21:26:50 -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:35.686 21:26:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:35.686 21:26:50 -- common/autotest_common.sh@10 -- # set +x 00:21:35.686 [2024-04-24 21:26:50.528100] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:35.686 21:26:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:35.686 21:26:50 -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:35.686 21:26:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:35.686 21:26:50 -- common/autotest_common.sh@10 -- # set +x 00:21:35.686 [2024-04-24 21:26:50.536055] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:35.686 21:26:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:35.686 21:26:50 -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:35.686 21:26:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:35.686 21:26:50 -- common/autotest_common.sh@10 -- # set +x 00:21:35.686 Malloc1 00:21:35.686 21:26:50 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:35.686 21:26:50 -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:21:35.686 21:26:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:35.686 21:26:50 -- common/autotest_common.sh@10 -- # set +x 00:21:35.686 21:26:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:35.686 21:26:50 -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:21:35.686 21:26:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:35.686 21:26:50 -- common/autotest_common.sh@10 -- # set +x 00:21:35.686 21:26:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:35.686 21:26:50 -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:21:35.686 21:26:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:35.686 21:26:50 -- common/autotest_common.sh@10 -- # set +x 00:21:35.686 21:26:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:35.686 21:26:50 -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:21:35.686 21:26:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:35.686 21:26:50 -- common/autotest_common.sh@10 -- # set +x 00:21:35.686 21:26:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:35.686 21:26:50 -- host/multicontroller.sh@44 -- # bdevperf_pid=1273648 00:21:35.686 21:26:50 -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:35.686 21:26:50 -- host/multicontroller.sh@47 -- # waitforlisten 1273648 /var/tmp/bdevperf.sock 00:21:35.686 21:26:50 -- common/autotest_common.sh@817 -- # '[' -z 1273648 ']' 00:21:35.686 21:26:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:35.686 21:26:50 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:35.686 21:26:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:35.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
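At this point the target holds two single-namespace subsystems, each listening on ports 4420 and 4421 of the same address, and a second app (bdevperf) is being started with its own RPC socket. Rebuilding the target-side topology by hand with the same RPCs the trace shows would look roughly like this (the rpc.py path assumes the standard SPDK layout):

  rpc=./scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  for i in 1 2; do
      $rpc bdev_malloc_create 64 512 -b "Malloc$((i - 1))"
      $rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK0000000000000$i"
      $rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$((i - 1))"
      # Two listeners per subsystem; 4421 is the alternate path that the
      # multipath/failover cases below exercise.
      $rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
      $rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4421
  done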
00:21:35.686 21:26:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:35.686 21:26:50 -- common/autotest_common.sh@10 -- # set +x 00:21:35.686 21:26:50 -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:21:36.628 21:26:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:36.628 21:26:51 -- common/autotest_common.sh@850 -- # return 0 00:21:36.628 21:26:51 -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:21:36.628 21:26:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:36.628 21:26:51 -- common/autotest_common.sh@10 -- # set +x 00:21:36.628 NVMe0n1 00:21:36.628 21:26:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:36.628 21:26:51 -- host/multicontroller.sh@54 -- # grep -c NVMe 00:21:36.628 21:26:51 -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:36.628 21:26:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:36.628 21:26:51 -- common/autotest_common.sh@10 -- # set +x 00:21:36.628 21:26:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:36.628 1 00:21:36.628 21:26:51 -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:21:36.628 21:26:51 -- common/autotest_common.sh@638 -- # local es=0 00:21:36.628 21:26:51 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:21:36.628 21:26:51 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:21:36.628 21:26:51 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:36.628 21:26:51 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:21:36.628 21:26:51 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:36.628 21:26:51 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:21:36.628 21:26:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:36.628 21:26:51 -- common/autotest_common.sh@10 -- # set +x 00:21:36.628 request: 00:21:36.628 { 00:21:36.628 "name": "NVMe0", 00:21:36.628 "trtype": "tcp", 00:21:36.628 "traddr": "10.0.0.2", 00:21:36.628 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:21:36.628 "hostaddr": "10.0.0.2", 00:21:36.628 "hostsvcid": "60000", 00:21:36.628 "adrfam": "ipv4", 00:21:36.628 "trsvcid": "4420", 00:21:36.628 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:36.628 "method": "bdev_nvme_attach_controller", 00:21:36.628 "req_id": 1 00:21:36.628 } 00:21:36.628 Got JSON-RPC error response 00:21:36.628 response: 00:21:36.628 { 00:21:36.628 "code": -114, 00:21:36.628 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:21:36.628 } 00:21:36.628 21:26:51 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:21:36.628 21:26:51 -- common/autotest_common.sh@641 -- # es=1 00:21:36.628 21:26:51 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:21:36.628 
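The NOT wrapper above inverts an exit status: the attach is required to fail, because a controller named NVMe0 already exists and this request presents a different host NQN. Written out directly, the check is essentially (rpc.py path assumed as before):

  # Expect the duplicate attach to be rejected; the target answers with
  # JSON-RPC error -114, as the response above shows.
  if ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 \
        -q nqn.2021-09-7.io.spdk:00001; then
      echo "duplicate attach unexpectedly succeeded" >&2
      exit 1
  fi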
21:26:51 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:21:36.628 21:26:51 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:21:36.628 21:26:51 -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:21:36.628 21:26:51 -- common/autotest_common.sh@638 -- # local es=0 00:21:36.628 21:26:51 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:21:36.628 21:26:51 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:21:36.628 21:26:51 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:36.628 21:26:51 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:21:36.628 21:26:51 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:36.628 21:26:51 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:21:36.628 21:26:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:36.628 21:26:51 -- common/autotest_common.sh@10 -- # set +x 00:21:36.628 request: 00:21:36.628 { 00:21:36.628 "name": "NVMe0", 00:21:36.628 "trtype": "tcp", 00:21:36.628 "traddr": "10.0.0.2", 00:21:36.628 "hostaddr": "10.0.0.2", 00:21:36.628 "hostsvcid": "60000", 00:21:36.628 "adrfam": "ipv4", 00:21:36.628 "trsvcid": "4420", 00:21:36.628 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:36.628 "method": "bdev_nvme_attach_controller", 00:21:36.628 "req_id": 1 00:21:36.628 } 00:21:36.628 Got JSON-RPC error response 00:21:36.628 response: 00:21:36.628 { 00:21:36.628 "code": -114, 00:21:36.628 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:21:36.628 } 00:21:36.628 21:26:51 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:21:36.628 21:26:51 -- common/autotest_common.sh@641 -- # es=1 00:21:36.628 21:26:51 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:21:36.628 21:26:51 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:21:36.628 21:26:51 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:21:36.628 21:26:51 -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:21:36.628 21:26:51 -- common/autotest_common.sh@638 -- # local es=0 00:21:36.628 21:26:51 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:21:36.628 21:26:51 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:21:36.628 21:26:51 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:36.628 21:26:51 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:21:36.628 21:26:51 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:36.628 21:26:51 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:21:36.628 21:26:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:36.628 21:26:51 -- 
common/autotest_common.sh@10 -- # set +x 00:21:36.628 request: 00:21:36.628 { 00:21:36.628 "name": "NVMe0", 00:21:36.628 "trtype": "tcp", 00:21:36.628 "traddr": "10.0.0.2", 00:21:36.628 "hostaddr": "10.0.0.2", 00:21:36.628 "hostsvcid": "60000", 00:21:36.628 "adrfam": "ipv4", 00:21:36.628 "trsvcid": "4420", 00:21:36.628 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:36.628 "multipath": "disable", 00:21:36.628 "method": "bdev_nvme_attach_controller", 00:21:36.628 "req_id": 1 00:21:36.628 } 00:21:36.628 Got JSON-RPC error response 00:21:36.628 response: 00:21:36.628 { 00:21:36.628 "code": -114, 00:21:36.628 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:21:36.628 } 00:21:36.628 21:26:51 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:21:36.628 21:26:51 -- common/autotest_common.sh@641 -- # es=1 00:21:36.628 21:26:51 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:21:36.628 21:26:51 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:21:36.628 21:26:51 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:21:36.628 21:26:51 -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:21:36.628 21:26:51 -- common/autotest_common.sh@638 -- # local es=0 00:21:36.628 21:26:51 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:21:36.628 21:26:51 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:21:36.628 21:26:51 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:36.628 21:26:51 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:21:36.628 21:26:51 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:36.628 21:26:51 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:21:36.628 21:26:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:36.628 21:26:51 -- common/autotest_common.sh@10 -- # set +x 00:21:36.890 request: 00:21:36.890 { 00:21:36.890 "name": "NVMe0", 00:21:36.890 "trtype": "tcp", 00:21:36.890 "traddr": "10.0.0.2", 00:21:36.890 "hostaddr": "10.0.0.2", 00:21:36.890 "hostsvcid": "60000", 00:21:36.890 "adrfam": "ipv4", 00:21:36.890 "trsvcid": "4420", 00:21:36.890 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:36.890 "multipath": "failover", 00:21:36.890 "method": "bdev_nvme_attach_controller", 00:21:36.890 "req_id": 1 00:21:36.890 } 00:21:36.890 Got JSON-RPC error response 00:21:36.890 response: 00:21:36.890 { 00:21:36.890 "code": -114, 00:21:36.890 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:21:36.890 } 00:21:36.890 21:26:51 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:21:36.890 21:26:51 -- common/autotest_common.sh@641 -- # es=1 00:21:36.890 21:26:51 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:21:36.890 21:26:51 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:21:36.890 21:26:51 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:21:36.890 21:26:51 -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:36.890 
21:26:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:36.890 21:26:51 -- common/autotest_common.sh@10 -- # set +x 00:21:36.890 00:21:36.890 21:26:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:36.890 21:26:51 -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:36.890 21:26:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:36.890 21:26:51 -- common/autotest_common.sh@10 -- # set +x 00:21:36.890 21:26:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:36.890 21:26:51 -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:21:36.890 21:26:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:36.890 21:26:51 -- common/autotest_common.sh@10 -- # set +x 00:21:37.151 00:21:37.151 21:26:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:37.151 21:26:51 -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:37.151 21:26:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:37.151 21:26:51 -- host/multicontroller.sh@90 -- # grep -c NVMe 00:21:37.151 21:26:51 -- common/autotest_common.sh@10 -- # set +x 00:21:37.151 21:26:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:37.151 21:26:51 -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:21:37.151 21:26:51 -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:38.094 0 00:21:38.094 21:26:52 -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:21:38.094 21:26:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:38.095 21:26:52 -- common/autotest_common.sh@10 -- # set +x 00:21:38.095 21:26:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:38.095 21:26:53 -- host/multicontroller.sh@100 -- # killprocess 1273648 00:21:38.095 21:26:53 -- common/autotest_common.sh@936 -- # '[' -z 1273648 ']' 00:21:38.095 21:26:53 -- common/autotest_common.sh@940 -- # kill -0 1273648 00:21:38.095 21:26:53 -- common/autotest_common.sh@941 -- # uname 00:21:38.095 21:26:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:38.095 21:26:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1273648 00:21:38.095 21:26:53 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:38.095 21:26:53 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:38.095 21:26:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1273648' 00:21:38.095 killing process with pid 1273648 00:21:38.095 21:26:53 -- common/autotest_common.sh@955 -- # kill 1273648 00:21:38.095 21:26:53 -- common/autotest_common.sh@960 -- # wait 1273648 00:21:38.669 21:26:53 -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:38.669 21:26:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:38.669 21:26:53 -- common/autotest_common.sh@10 -- # set +x 00:21:38.669 21:26:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:38.669 21:26:53 -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:21:38.669 21:26:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:38.669 21:26:53 -- 
common/autotest_common.sh@10 -- # set +x 00:21:38.669 21:26:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:38.669 21:26:53 -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:21:38.669 21:26:53 -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:38.669 21:26:53 -- common/autotest_common.sh@1598 -- # read -r file 00:21:38.669 21:26:53 -- common/autotest_common.sh@1597 -- # find /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:21:38.669 21:26:53 -- common/autotest_common.sh@1597 -- # sort -u 00:21:38.669 21:26:53 -- common/autotest_common.sh@1599 -- # cat 00:21:38.669 --- /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:21:38.669 [2024-04-24 21:26:50.677303] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization... 00:21:38.669 [2024-04-24 21:26:50.677440] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1273648 ] 00:21:38.669 EAL: No free 2048 kB hugepages reported on node 1 00:21:38.669 [2024-04-24 21:26:50.804386] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:38.669 [2024-04-24 21:26:50.894662] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:38.669 [2024-04-24 21:26:51.864834] bdev.c:4548:bdev_name_add: *ERROR*: Bdev name 389398aa-3bd9-4df0-baf5-149c284010e5 already exists 00:21:38.669 [2024-04-24 21:26:51.864876] bdev.c:7651:bdev_register: *ERROR*: Unable to add uuid:389398aa-3bd9-4df0-baf5-149c284010e5 alias for bdev NVMe1n1 00:21:38.669 [2024-04-24 21:26:51.864892] bdev_nvme.c:4272:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:21:38.669 Running I/O for 1 seconds... 
00:21:38.669
00:21:38.669 Latency(us)
00:21:38.669 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:38.669 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096)
00:21:38.669 NVMe0n1 : 1.00 24853.87 97.09 0.00 0.00 5139.55 2604.19 10692.72
00:21:38.669 ===================================================================================================================
00:21:38.669 Total : 24853.87 97.09 0.00 0.00 5139.55 2604.19 10692.72
00:21:38.669 Received shutdown signal, test time was about 1.000000 seconds
00:21:38.669
00:21:38.669 Latency(us)
00:21:38.669 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:38.669 ===================================================================================================================
00:21:38.670 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
--- /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/try.txt ---
00:21:38.670 21:26:53 -- common/autotest_common.sh@1604 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/try.txt
00:21:38.670 21:26:53 -- common/autotest_common.sh@1598 -- # read -r file
00:21:38.670 21:26:53 -- host/multicontroller.sh@108 -- # nvmftestfini
00:21:38.670 21:26:53 -- nvmf/common.sh@477 -- # nvmfcleanup
00:21:38.670 21:26:53 -- nvmf/common.sh@117 -- # sync
00:21:38.670 21:26:53 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:21:38.670 21:26:53 -- nvmf/common.sh@120 -- # set +e
00:21:38.670 21:26:53 -- nvmf/common.sh@121 -- # for i in {1..20}
00:21:38.670 21:26:53 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:21:38.670 rmmod nvme_tcp
00:21:38.670 rmmod nvme_fabrics
00:21:38.670 rmmod nvme_keyring
00:21:38.670 21:26:53 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:21:38.670 21:26:53 -- nvmf/common.sh@124 -- # set -e
00:21:38.670 21:26:53 -- nvmf/common.sh@125 -- # return 0
00:21:38.670 21:26:53 -- nvmf/common.sh@478 -- # '[' -n 1273487 ']'
00:21:38.670 21:26:53 -- nvmf/common.sh@479 -- # killprocess 1273487
00:21:38.670 21:26:53 -- common/autotest_common.sh@936 -- # '[' -z 1273487 ']'
00:21:38.670 21:26:53 -- common/autotest_common.sh@940 -- # kill -0 1273487
00:21:38.670 21:26:53 -- common/autotest_common.sh@941 -- # uname
00:21:38.670 21:26:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:21:38.670 21:26:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1273487
00:21:38.670 21:26:53 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:21:38.670 21:26:53 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:21:38.670 21:26:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1273487'
killing process with pid 1273487
00:21:38.670 21:26:53 -- common/autotest_common.sh@955 -- # kill 1273487
00:21:38.670 21:26:53 -- common/autotest_common.sh@960 -- # wait 1273487
00:21:39.237 21:26:54 -- nvmf/common.sh@481 -- # '[' '' == iso ']'
00:21:39.237 21:26:54 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]]
00:21:39.237 21:26:54 -- nvmf/common.sh@485 -- # nvmf_tcp_fini
00:21:39.237 21:26:54 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:21:39.237 21:26:54 -- nvmf/common.sh@278 -- # remove_spdk_ns
00:21:39.237 21:26:54 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:21:39.237 21:26:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:21:39.237 21:26:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:21:41.776 21:26:56 -- nvmf/common.sh@279
-- # ip -4 addr flush cvl_0_1 00:21:41.776 00:21:41.776 real 0m12.000s 00:21:41.776 user 0m16.804s 00:21:41.776 sys 0m4.806s 00:21:41.776 21:26:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:41.776 21:26:56 -- common/autotest_common.sh@10 -- # set +x 00:21:41.776 ************************************ 00:21:41.776 END TEST nvmf_multicontroller 00:21:41.776 ************************************ 00:21:41.776 21:26:56 -- nvmf/nvmf.sh@90 -- # run_test nvmf_aer /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:21:41.776 21:26:56 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:41.776 21:26:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:41.776 21:26:56 -- common/autotest_common.sh@10 -- # set +x 00:21:41.776 ************************************ 00:21:41.776 START TEST nvmf_aer 00:21:41.776 ************************************ 00:21:41.776 21:26:56 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:21:41.776 * Looking for test storage... 00:21:41.776 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host 00:21:41.776 21:26:56 -- host/aer.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:21:41.776 21:26:56 -- nvmf/common.sh@7 -- # uname -s 00:21:41.776 21:26:56 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:41.776 21:26:56 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:41.776 21:26:56 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:41.776 21:26:56 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:41.776 21:26:56 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:41.776 21:26:56 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:41.776 21:26:56 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:41.776 21:26:56 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:41.776 21:26:56 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:41.776 21:26:56 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:41.776 21:26:56 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b7babf-2e5c-ee11-906e-a4bf01970bf2 00:21:41.776 21:26:56 -- nvmf/common.sh@18 -- # NVME_HOSTID=80b7babf-2e5c-ee11-906e-a4bf01970bf2 00:21:41.776 21:26:56 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:41.776 21:26:56 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:41.776 21:26:56 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:21:41.776 21:26:56 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:41.776 21:26:56 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:21:41.776 21:26:56 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:41.776 21:26:56 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:41.776 21:26:56 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:41.776 21:26:56 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:41.776 21:26:56 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:41.776 21:26:56 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:41.776 21:26:56 -- paths/export.sh@5 -- # export PATH 00:21:41.776 21:26:56 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:41.776 21:26:56 -- nvmf/common.sh@47 -- # : 0 00:21:41.776 21:26:56 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:41.776 21:26:56 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:41.776 21:26:56 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:41.776 21:26:56 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:41.776 21:26:56 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:41.776 21:26:56 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:41.776 21:26:56 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:41.776 21:26:56 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:41.776 21:26:56 -- host/aer.sh@11 -- # nvmftestinit 00:21:41.776 21:26:56 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:41.776 21:26:56 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:41.776 21:26:56 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:41.776 21:26:56 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:41.776 21:26:56 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:41.776 21:26:56 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:41.776 21:26:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:41.776 21:26:56 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:41.776 21:26:56 -- nvmf/common.sh@403 -- # [[ phy-fallback != virt ]] 00:21:41.776 21:26:56 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:21:41.776 21:26:56 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:41.776 21:26:56 -- common/autotest_common.sh@10 -- # set +x 00:21:47.050 21:27:01 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:47.050 21:27:01 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:47.050 21:27:01 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:47.050 21:27:01 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:47.051 21:27:01 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:47.051 21:27:01 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:47.051 21:27:01 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:47.051 21:27:01 -- nvmf/common.sh@295 -- # net_devs=() 00:21:47.051 21:27:01 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:47.051 21:27:01 -- nvmf/common.sh@296 -- # e810=() 00:21:47.051 21:27:01 -- nvmf/common.sh@296 -- # local -ga e810 00:21:47.051 21:27:01 -- nvmf/common.sh@297 -- # x722=() 00:21:47.051 21:27:01 -- nvmf/common.sh@297 -- # local -ga x722 00:21:47.051 21:27:01 -- nvmf/common.sh@298 -- # mlx=() 00:21:47.051 21:27:01 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:47.051 21:27:01 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:47.051 21:27:01 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:47.051 21:27:01 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:47.051 21:27:01 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:47.051 21:27:01 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:47.051 21:27:01 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:47.051 21:27:01 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:47.051 21:27:01 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:47.051 21:27:01 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:47.051 21:27:01 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:47.051 21:27:01 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:47.051 21:27:01 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:47.051 21:27:01 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:47.051 21:27:01 -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:21:47.051 21:27:01 -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:21:47.051 21:27:01 -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:21:47.051 21:27:01 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:47.051 21:27:01 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:47.051 21:27:01 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:21:47.051 Found 0000:27:00.0 (0x8086 - 0x159b) 00:21:47.051 21:27:01 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:47.051 21:27:01 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:47.051 21:27:01 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:47.051 21:27:01 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:47.051 21:27:01 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:47.051 21:27:01 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:47.051 21:27:01 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:21:47.051 Found 0000:27:00.1 (0x8086 - 0x159b) 00:21:47.051 21:27:01 
-- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:47.051 21:27:01 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:47.051 21:27:01 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:47.051 21:27:01 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:47.051 21:27:01 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:47.051 21:27:01 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:47.051 21:27:01 -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:21:47.051 21:27:01 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:47.051 21:27:01 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:47.051 21:27:01 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:47.051 21:27:01 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:47.051 21:27:01 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:21:47.051 Found net devices under 0000:27:00.0: cvl_0_0 00:21:47.051 21:27:01 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:47.051 21:27:01 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:47.051 21:27:01 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:47.051 21:27:01 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:47.051 21:27:01 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:47.051 21:27:01 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:21:47.051 Found net devices under 0000:27:00.1: cvl_0_1 00:21:47.051 21:27:01 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:47.051 21:27:01 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:21:47.051 21:27:01 -- nvmf/common.sh@403 -- # is_hw=yes 00:21:47.051 21:27:01 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:21:47.051 21:27:01 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:21:47.051 21:27:01 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:21:47.051 21:27:01 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:47.051 21:27:01 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:47.051 21:27:01 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:47.051 21:27:01 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:47.051 21:27:01 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:47.051 21:27:01 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:47.051 21:27:01 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:47.051 21:27:01 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:47.051 21:27:01 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:47.051 21:27:01 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:47.051 21:27:01 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:47.051 21:27:01 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:47.051 21:27:01 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:47.051 21:27:01 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:47.051 21:27:01 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:47.051 21:27:01 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:47.051 21:27:01 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:47.051 21:27:01 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:47.051 21:27:01 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
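The nvmf_tcp_init block above stitches the two ports of the NIC into a point-to-point test network: cvl_0_0 moves into a fresh namespace as the target side (10.0.0.2) and cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1). Stripped of the trace prefixes, the plumbing is:

  # Build the namespace-based TCP test topology used throughout these tests.
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT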
00:21:47.051 21:27:01 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:47.051 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:47.051 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.247 ms 00:21:47.051 00:21:47.051 --- 10.0.0.2 ping statistics --- 00:21:47.051 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:47.051 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:21:47.051 21:27:01 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:47.051 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:47.051 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.056 ms 00:21:47.051 00:21:47.051 --- 10.0.0.1 ping statistics --- 00:21:47.051 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:47.051 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:21:47.051 21:27:01 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:47.051 21:27:01 -- nvmf/common.sh@411 -- # return 0 00:21:47.051 21:27:01 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:47.051 21:27:01 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:47.051 21:27:01 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:21:47.051 21:27:01 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:47.051 21:27:01 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:47.051 21:27:01 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:47.051 21:27:01 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:47.051 21:27:01 -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:21:47.051 21:27:01 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:47.051 21:27:01 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:47.051 21:27:01 -- common/autotest_common.sh@10 -- # set +x 00:21:47.051 21:27:01 -- nvmf/common.sh@470 -- # nvmfpid=1278431 00:21:47.051 21:27:01 -- nvmf/common.sh@471 -- # waitforlisten 1278431 00:21:47.051 21:27:01 -- common/autotest_common.sh@817 -- # '[' -z 1278431 ']' 00:21:47.051 21:27:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:47.051 21:27:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:47.051 21:27:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:47.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:47.051 21:27:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:47.051 21:27:01 -- common/autotest_common.sh@10 -- # set +x 00:21:47.051 21:27:01 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:47.051 [2024-04-24 21:27:01.968100] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization... 00:21:47.051 [2024-04-24 21:27:01.968202] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:47.312 EAL: No free 2048 kB hugepages reported on node 1 00:21:47.312 [2024-04-24 21:27:02.088939] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:47.312 [2024-04-24 21:27:02.181569] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:47.312 [2024-04-24 21:27:02.181603] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
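The app_setup_trace notices above point at spdk_trace for inspecting the 0xFFFF tracepoint mask the target was started with; capturing a snapshot from this run would be along the lines of (the binary path and output file are assumptions):

  # Read the nvmf tracepoints from the shm region of the app started with
  # "-i 0", the exact invocation the notice itself recommends.
  ./build/bin/spdk_trace -s nvmf -i 0 > /tmp/nvmf_trace_snapshot.txt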
00:21:47.312 [2024-04-24 21:27:02.181614] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:47.312 [2024-04-24 21:27:02.181626] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:47.312 [2024-04-24 21:27:02.181633] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:47.312 [2024-04-24 21:27:02.181787] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:47.312 [2024-04-24 21:27:02.181890] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:47.312 [2024-04-24 21:27:02.182004] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:47.312 [2024-04-24 21:27:02.182013] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:47.884 21:27:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:47.884 21:27:02 -- common/autotest_common.sh@850 -- # return 0 00:21:47.884 21:27:02 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:47.884 21:27:02 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:47.884 21:27:02 -- common/autotest_common.sh@10 -- # set +x 00:21:47.884 21:27:02 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:47.884 21:27:02 -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:47.884 21:27:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:47.884 21:27:02 -- common/autotest_common.sh@10 -- # set +x 00:21:47.884 [2024-04-24 21:27:02.736686] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:47.884 21:27:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:47.884 21:27:02 -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:21:47.884 21:27:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:47.884 21:27:02 -- common/autotest_common.sh@10 -- # set +x 00:21:47.884 Malloc0 00:21:47.884 21:27:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:47.884 21:27:02 -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:21:47.884 21:27:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:47.884 21:27:02 -- common/autotest_common.sh@10 -- # set +x 00:21:47.884 21:27:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:47.884 21:27:02 -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:47.884 21:27:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:47.884 21:27:02 -- common/autotest_common.sh@10 -- # set +x 00:21:47.884 21:27:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:47.884 21:27:02 -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:47.884 21:27:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:47.884 21:27:02 -- common/autotest_common.sh@10 -- # set +x 00:21:47.884 [2024-04-24 21:27:02.805197] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:47.884 21:27:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:47.884 21:27:02 -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:21:47.884 21:27:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:47.884 21:27:02 -- common/autotest_common.sh@10 -- # set +x 00:21:47.884 [2024-04-24 21:27:02.812942] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated 
feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:21:47.884 [ 00:21:47.884 { 00:21:47.884 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:47.884 "subtype": "Discovery", 00:21:47.884 "listen_addresses": [], 00:21:47.884 "allow_any_host": true, 00:21:47.884 "hosts": [] 00:21:47.884 }, 00:21:47.884 { 00:21:47.884 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:47.884 "subtype": "NVMe", 00:21:47.884 "listen_addresses": [ 00:21:47.884 { 00:21:47.884 "transport": "TCP", 00:21:47.884 "trtype": "TCP", 00:21:47.884 "adrfam": "IPv4", 00:21:47.884 "traddr": "10.0.0.2", 00:21:47.884 "trsvcid": "4420" 00:21:47.884 } 00:21:47.884 ], 00:21:47.884 "allow_any_host": true, 00:21:47.884 "hosts": [], 00:21:47.884 "serial_number": "SPDK00000000000001", 00:21:47.884 "model_number": "SPDK bdev Controller", 00:21:47.884 "max_namespaces": 2, 00:21:47.884 "min_cntlid": 1, 00:21:47.884 "max_cntlid": 65519, 00:21:47.884 "namespaces": [ 00:21:47.884 { 00:21:47.884 "nsid": 1, 00:21:47.884 "bdev_name": "Malloc0", 00:21:47.884 "name": "Malloc0", 00:21:47.884 "nguid": "AF39A6BF105D403FB3C13A0CBCA504A6", 00:21:47.884 "uuid": "af39a6bf-105d-403f-b3c1-3a0cbca504a6" 00:21:47.884 } 00:21:47.884 ] 00:21:47.884 } 00:21:47.884 ] 00:21:47.884 21:27:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:47.884 21:27:02 -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:21:47.884 21:27:02 -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:21:47.884 21:27:02 -- host/aer.sh@33 -- # aerpid=1278591 00:21:47.884 21:27:02 -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:21:47.884 21:27:02 -- common/autotest_common.sh@1251 -- # local i=0 00:21:47.884 21:27:02 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:47.884 21:27:02 -- common/autotest_common.sh@1253 -- # '[' 0 -lt 200 ']' 00:21:47.884 21:27:02 -- common/autotest_common.sh@1254 -- # i=1 00:21:47.884 21:27:02 -- common/autotest_common.sh@1255 -- # sleep 0.1 00:21:47.884 21:27:02 -- host/aer.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:21:48.143 21:27:02 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:48.143 21:27:02 -- common/autotest_common.sh@1253 -- # '[' 1 -lt 200 ']' 00:21:48.143 21:27:02 -- common/autotest_common.sh@1254 -- # i=2 00:21:48.143 21:27:02 -- common/autotest_common.sh@1255 -- # sleep 0.1 00:21:48.143 EAL: No free 2048 kB hugepages reported on node 1 00:21:48.143 21:27:03 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:48.143 21:27:03 -- common/autotest_common.sh@1253 -- # '[' 2 -lt 200 ']' 00:21:48.143 21:27:03 -- common/autotest_common.sh@1254 -- # i=3 00:21:48.143 21:27:03 -- common/autotest_common.sh@1255 -- # sleep 0.1 00:21:48.401 21:27:03 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:48.401 21:27:03 -- common/autotest_common.sh@1253 -- # '[' 3 -lt 200 ']' 00:21:48.401 21:27:03 -- common/autotest_common.sh@1254 -- # i=4 00:21:48.401 21:27:03 -- common/autotest_common.sh@1255 -- # sleep 0.1 00:21:48.401 21:27:03 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:48.401 21:27:03 -- common/autotest_common.sh@1258 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:21:48.401 21:27:03 -- common/autotest_common.sh@1262 -- # return 0 00:21:48.401 21:27:03 -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:21:48.401 21:27:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:48.401 21:27:03 -- common/autotest_common.sh@10 -- # set +x 00:21:48.401 Malloc1 00:21:48.401 21:27:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:48.401 21:27:03 -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:21:48.401 21:27:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:48.401 21:27:03 -- common/autotest_common.sh@10 -- # set +x 00:21:48.401 21:27:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:48.401 21:27:03 -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:21:48.401 21:27:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:48.401 21:27:03 -- common/autotest_common.sh@10 -- # set +x 00:21:48.401 [ 00:21:48.401 { 00:21:48.401 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:48.401 "subtype": "Discovery", 00:21:48.401 "listen_addresses": [], 00:21:48.401 "allow_any_host": true, 00:21:48.401 "hosts": [] 00:21:48.401 }, 00:21:48.401 { 00:21:48.401 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:48.401 "subtype": "NVMe", 00:21:48.401 "listen_addresses": [ 00:21:48.401 { 00:21:48.401 "transport": "TCP", 00:21:48.401 "trtype": "TCP", 00:21:48.401 "adrfam": "IPv4", 00:21:48.401 "traddr": "10.0.0.2", 00:21:48.401 "trsvcid": "4420" 00:21:48.401 } 00:21:48.401 ], 00:21:48.401 "allow_any_host": true, 00:21:48.401 "hosts": [], 00:21:48.401 "serial_number": "SPDK00000000000001", 00:21:48.401 "model_number": "SPDK bdev Controller", 00:21:48.401 "max_namespaces": 2, 00:21:48.401 "min_cntlid": 1, 00:21:48.401 "max_cntlid": 65519, 00:21:48.401 "namespaces": [ 00:21:48.401 { 00:21:48.401 "nsid": 1, 00:21:48.401 "bdev_name": "Malloc0", 00:21:48.401 "name": "Malloc0", 00:21:48.401 "nguid": "AF39A6BF105D403FB3C13A0CBCA504A6", 00:21:48.401 "uuid": "af39a6bf-105d-403f-b3c1-3a0cbca504a6" 00:21:48.401 }, 00:21:48.401 { 00:21:48.401 "nsid": 2, 00:21:48.401 "bdev_name": "Malloc1", 00:21:48.401 "name": "Malloc1", 00:21:48.401 "nguid": "AE8F017181B545FB96B02315E04CED81", 00:21:48.401 "uuid": "ae8f0171-81b5-45fb-96b0-2315e04ced81" 00:21:48.401 } 00:21:48.401 ] 00:21:48.401 } 00:21:48.401 ] 00:21:48.401 21:27:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:48.401 21:27:03 -- host/aer.sh@43 -- # wait 1278591 00:21:48.401 Asynchronous Event Request test 00:21:48.401 Attaching to 10.0.0.2 00:21:48.401 Attached to 10.0.0.2 00:21:48.401 Registering asynchronous event callbacks... 00:21:48.401 Starting namespace attribute notice tests for all controllers... 00:21:48.401 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:21:48.401 aer_cb - Changed Namespace 00:21:48.401 Cleaning up... 
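That "aer_cb - Changed Namespace" line is the whole point of aer.sh: the target starts with one namespace, the aer tool connects and arms Asynchronous Event Requests, and hot-adding a second namespace makes the controller post a Namespace Attribute Changed notice (log page 4, the Changed Namespace List), at which point the tool touches the sentinel file the waitforfile loop above is polling for. Condensed from the trace, with rpc.py standing in for the script's rpc_cmd wrapper:

  ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &   # target lives in the netns
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 --name Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  test/nvme/aer/aer -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file &
  rpc.py bdev_malloc_create 64 4096 --name Malloc1
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2       # triggers the AEN

The second nvmf_get_subsystems dump above, now listing nsid 2 / Malloc1, confirms the hot-add landed before the AEN was observed.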
00:21:48.401 21:27:03 -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:21:48.401 21:27:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:48.401 21:27:03 -- common/autotest_common.sh@10 -- # set +x 00:21:48.659 21:27:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:48.659 21:27:03 -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:21:48.659 21:27:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:48.659 21:27:03 -- common/autotest_common.sh@10 -- # set +x 00:21:48.659 21:27:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:48.659 21:27:03 -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:48.659 21:27:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:48.659 21:27:03 -- common/autotest_common.sh@10 -- # set +x 00:21:48.659 21:27:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:48.659 21:27:03 -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:21:48.659 21:27:03 -- host/aer.sh@51 -- # nvmftestfini 00:21:48.659 21:27:03 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:48.659 21:27:03 -- nvmf/common.sh@117 -- # sync 00:21:48.659 21:27:03 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:48.659 21:27:03 -- nvmf/common.sh@120 -- # set +e 00:21:48.659 21:27:03 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:48.659 21:27:03 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:48.659 rmmod nvme_tcp 00:21:48.659 rmmod nvme_fabrics 00:21:48.659 rmmod nvme_keyring 00:21:48.659 21:27:03 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:48.659 21:27:03 -- nvmf/common.sh@124 -- # set -e 00:21:48.659 21:27:03 -- nvmf/common.sh@125 -- # return 0 00:21:48.659 21:27:03 -- nvmf/common.sh@478 -- # '[' -n 1278431 ']' 00:21:48.659 21:27:03 -- nvmf/common.sh@479 -- # killprocess 1278431 00:21:48.659 21:27:03 -- common/autotest_common.sh@936 -- # '[' -z 1278431 ']' 00:21:48.659 21:27:03 -- common/autotest_common.sh@940 -- # kill -0 1278431 00:21:48.659 21:27:03 -- common/autotest_common.sh@941 -- # uname 00:21:48.659 21:27:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:48.659 21:27:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1278431 00:21:48.659 21:27:03 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:48.659 21:27:03 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:48.659 21:27:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1278431' 00:21:48.659 killing process with pid 1278431 00:21:48.659 21:27:03 -- common/autotest_common.sh@955 -- # kill 1278431 00:21:48.659 [2024-04-24 21:27:03.610227] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:21:48.659 21:27:03 -- common/autotest_common.sh@960 -- # wait 1278431 00:21:49.226 21:27:04 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:49.226 21:27:04 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:21:49.226 21:27:04 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:21:49.226 21:27:04 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:49.226 21:27:04 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:49.226 21:27:04 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:49.226 21:27:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:49.226 21:27:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:51.767 21:27:06 -- 
nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:51.767 00:21:51.767 real 0m9.801s 00:21:51.767 user 0m8.690s 00:21:51.767 sys 0m4.580s 00:21:51.767 21:27:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:51.767 21:27:06 -- common/autotest_common.sh@10 -- # set +x 00:21:51.767 ************************************ 00:21:51.767 END TEST nvmf_aer 00:21:51.767 ************************************ 00:21:51.767 21:27:06 -- nvmf/nvmf.sh@91 -- # run_test nvmf_async_init /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:21:51.767 21:27:06 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:51.767 21:27:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:51.767 21:27:06 -- common/autotest_common.sh@10 -- # set +x 00:21:51.767 ************************************ 00:21:51.767 START TEST nvmf_async_init 00:21:51.767 ************************************ 00:21:51.767 21:27:06 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:21:51.767 * Looking for test storage... 00:21:51.767 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host 00:21:51.767 21:27:06 -- host/async_init.sh@11 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:21:51.767 21:27:06 -- nvmf/common.sh@7 -- # uname -s 00:21:51.767 21:27:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:51.767 21:27:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:51.767 21:27:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:51.767 21:27:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:51.767 21:27:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:51.767 21:27:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:51.767 21:27:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:51.767 21:27:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:51.767 21:27:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:51.767 21:27:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:51.767 21:27:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b7babf-2e5c-ee11-906e-a4bf01970bf2 00:21:51.767 21:27:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=80b7babf-2e5c-ee11-906e-a4bf01970bf2 00:21:51.767 21:27:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:51.767 21:27:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:51.767 21:27:06 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:21:51.767 21:27:06 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:51.767 21:27:06 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:21:51.767 21:27:06 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:51.767 21:27:06 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:51.767 21:27:06 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:51.767 21:27:06 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:51.767 21:27:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:51.767 21:27:06 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:51.767 21:27:06 -- paths/export.sh@5 -- # export PATH 00:21:51.767 21:27:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:51.767 21:27:06 -- nvmf/common.sh@47 -- # : 0 00:21:51.767 21:27:06 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:51.767 21:27:06 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:51.767 21:27:06 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:51.767 21:27:06 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:51.767 21:27:06 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:51.767 21:27:06 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:51.767 21:27:06 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:51.767 21:27:06 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:51.767 21:27:06 -- host/async_init.sh@13 -- # null_bdev_size=1024 00:21:51.767 21:27:06 -- host/async_init.sh@14 -- # null_block_size=512 00:21:51.767 21:27:06 -- host/async_init.sh@15 -- # null_bdev=null0 00:21:51.767 21:27:06 -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:21:51.767 21:27:06 -- host/async_init.sh@20 -- # uuidgen 00:21:51.767 21:27:06 -- host/async_init.sh@20 -- # tr -d - 00:21:51.767 21:27:06 -- host/async_init.sh@20 -- # nguid=4af628c3e62443d6869d108bd6becc93 00:21:51.767 21:27:06 -- host/async_init.sh@22 -- # nvmftestinit 00:21:51.767 21:27:06 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 
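A small detail worth noticing in the setup above: async_init.sh builds the namespace NGUID by stripping the dashes from a freshly generated UUID, and the bdev layer later reports the same 16 bytes back as a dashed "uuid" in bdev_get_bdevs, which is how the test ties the attached nvme0n1 to the namespace it created. The derivation, exactly as traced:

  nguid=$(uuidgen | tr -d -)       # 4af628c3e62443d6869d108bd6becc93 in this run
  # consumed later as: rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g "$nguid"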
00:21:51.767 21:27:06 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:51.767 21:27:06 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:51.767 21:27:06 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:51.767 21:27:06 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:51.767 21:27:06 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:51.767 21:27:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:51.767 21:27:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:51.767 21:27:06 -- nvmf/common.sh@403 -- # [[ phy-fallback != virt ]] 00:21:51.767 21:27:06 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:21:51.767 21:27:06 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:51.767 21:27:06 -- common/autotest_common.sh@10 -- # set +x 00:21:57.048 21:27:11 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:57.048 21:27:11 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:57.048 21:27:11 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:57.048 21:27:11 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:57.048 21:27:11 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:57.048 21:27:11 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:57.048 21:27:11 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:57.048 21:27:11 -- nvmf/common.sh@295 -- # net_devs=() 00:21:57.048 21:27:11 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:57.048 21:27:11 -- nvmf/common.sh@296 -- # e810=() 00:21:57.048 21:27:11 -- nvmf/common.sh@296 -- # local -ga e810 00:21:57.048 21:27:11 -- nvmf/common.sh@297 -- # x722=() 00:21:57.048 21:27:11 -- nvmf/common.sh@297 -- # local -ga x722 00:21:57.048 21:27:11 -- nvmf/common.sh@298 -- # mlx=() 00:21:57.048 21:27:11 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:57.048 21:27:11 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:57.048 21:27:11 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:57.048 21:27:11 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:57.048 21:27:11 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:57.048 21:27:11 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:57.048 21:27:11 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:57.048 21:27:11 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:57.048 21:27:11 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:57.048 21:27:11 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:57.048 21:27:11 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:57.048 21:27:11 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:57.048 21:27:11 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:57.048 21:27:11 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:57.048 21:27:11 -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:21:57.048 21:27:11 -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:21:57.048 21:27:11 -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:21:57.048 21:27:11 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:57.048 21:27:11 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:57.048 21:27:11 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:21:57.048 Found 0000:27:00.0 (0x8086 - 0x159b) 00:21:57.048 21:27:11 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:57.048 21:27:11 -- 
nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:57.048 21:27:11 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:57.048 21:27:11 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:57.048 21:27:11 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:57.048 21:27:11 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:57.048 21:27:11 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:21:57.048 Found 0000:27:00.1 (0x8086 - 0x159b) 00:21:57.048 21:27:11 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:57.048 21:27:11 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:57.048 21:27:11 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:57.048 21:27:11 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:57.048 21:27:11 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:57.048 21:27:11 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:57.048 21:27:11 -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:21:57.048 21:27:11 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:57.048 21:27:11 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:57.048 21:27:11 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:57.048 21:27:11 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:57.048 21:27:11 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:21:57.048 Found net devices under 0000:27:00.0: cvl_0_0 00:21:57.048 21:27:11 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:57.048 21:27:11 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:57.048 21:27:11 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:57.048 21:27:11 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:57.048 21:27:11 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:57.048 21:27:11 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:21:57.048 Found net devices under 0000:27:00.1: cvl_0_1 00:21:57.048 21:27:11 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:57.048 21:27:11 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:21:57.048 21:27:11 -- nvmf/common.sh@403 -- # is_hw=yes 00:21:57.048 21:27:11 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:21:57.048 21:27:11 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:21:57.048 21:27:11 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:21:57.048 21:27:11 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:57.048 21:27:11 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:57.048 21:27:11 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:57.048 21:27:11 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:57.048 21:27:11 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:57.048 21:27:11 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:57.048 21:27:11 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:57.048 21:27:11 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:57.048 21:27:11 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:57.048 21:27:11 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:57.048 21:27:11 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:57.048 21:27:11 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:57.048 21:27:11 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:57.048 21:27:11 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 
dev cvl_0_1 00:21:57.048 21:27:11 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:57.048 21:27:11 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:57.048 21:27:11 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:57.048 21:27:11 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:57.048 21:27:11 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:57.048 21:27:11 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:57.048 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:57.048 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.729 ms 00:21:57.048 00:21:57.048 --- 10.0.0.2 ping statistics --- 00:21:57.048 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:57.048 rtt min/avg/max/mdev = 0.729/0.729/0.729/0.000 ms 00:21:57.048 21:27:11 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:57.048 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:57.048 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.368 ms 00:21:57.048 00:21:57.048 --- 10.0.0.1 ping statistics --- 00:21:57.048 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:57.048 rtt min/avg/max/mdev = 0.368/0.368/0.368/0.000 ms 00:21:57.048 21:27:11 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:57.048 21:27:11 -- nvmf/common.sh@411 -- # return 0 00:21:57.049 21:27:11 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:57.049 21:27:11 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:57.049 21:27:11 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:21:57.049 21:27:11 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:57.049 21:27:11 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:57.049 21:27:11 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:57.049 21:27:11 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:57.049 21:27:11 -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:21:57.049 21:27:11 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:57.049 21:27:11 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:57.049 21:27:11 -- common/autotest_common.sh@10 -- # set +x 00:21:57.049 21:27:11 -- nvmf/common.sh@470 -- # nvmfpid=1283200 00:21:57.049 21:27:11 -- nvmf/common.sh@471 -- # waitforlisten 1283200 00:21:57.049 21:27:11 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:21:57.049 21:27:11 -- common/autotest_common.sh@817 -- # '[' -z 1283200 ']' 00:21:57.049 21:27:11 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:57.049 21:27:11 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:57.049 21:27:11 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:57.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:57.049 21:27:11 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:57.049 21:27:11 -- common/autotest_common.sh@10 -- # set +x 00:21:57.049 [2024-04-24 21:27:11.696482] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization... 
00:21:57.049 [2024-04-24 21:27:11.696588] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:57.049 EAL: No free 2048 kB hugepages reported on node 1 00:21:57.049 [2024-04-24 21:27:11.822124] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:57.049 [2024-04-24 21:27:11.913618] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:57.049 [2024-04-24 21:27:11.913653] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:57.049 [2024-04-24 21:27:11.913663] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:57.049 [2024-04-24 21:27:11.913672] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:57.049 [2024-04-24 21:27:11.913679] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:57.049 [2024-04-24 21:27:11.913708] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:57.679 21:27:12 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:57.679 21:27:12 -- common/autotest_common.sh@850 -- # return 0 00:21:57.679 21:27:12 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:57.679 21:27:12 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:57.679 21:27:12 -- common/autotest_common.sh@10 -- # set +x 00:21:57.679 21:27:12 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:57.679 21:27:12 -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:21:57.679 21:27:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:57.679 21:27:12 -- common/autotest_common.sh@10 -- # set +x 00:21:57.679 [2024-04-24 21:27:12.435727] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:57.679 21:27:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:57.679 21:27:12 -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:21:57.679 21:27:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:57.679 21:27:12 -- common/autotest_common.sh@10 -- # set +x 00:21:57.679 null0 00:21:57.679 21:27:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:57.679 21:27:12 -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:21:57.679 21:27:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:57.679 21:27:12 -- common/autotest_common.sh@10 -- # set +x 00:21:57.679 21:27:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:57.679 21:27:12 -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:21:57.679 21:27:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:57.679 21:27:12 -- common/autotest_common.sh@10 -- # set +x 00:21:57.679 21:27:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:57.679 21:27:12 -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 4af628c3e62443d6869d108bd6becc93 00:21:57.679 21:27:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:57.679 21:27:12 -- common/autotest_common.sh@10 -- # set +x 00:21:57.679 21:27:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:57.679 21:27:12 -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:21:57.679 21:27:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:57.679 21:27:12 -- common/autotest_common.sh@10 -- # set +x 00:21:57.679 [2024-04-24 21:27:12.475939] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:57.679 21:27:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:57.679 21:27:12 -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:21:57.679 21:27:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:57.679 21:27:12 -- common/autotest_common.sh@10 -- # set +x 00:21:57.939 nvme0n1 00:21:57.939 21:27:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:57.939 21:27:12 -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:57.939 21:27:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:57.939 21:27:12 -- common/autotest_common.sh@10 -- # set +x 00:21:57.939 [ 00:21:57.939 { 00:21:57.939 "name": "nvme0n1", 00:21:57.939 "aliases": [ 00:21:57.939 "4af628c3-e624-43d6-869d-108bd6becc93" 00:21:57.939 ], 00:21:57.939 "product_name": "NVMe disk", 00:21:57.939 "block_size": 512, 00:21:57.939 "num_blocks": 2097152, 00:21:57.939 "uuid": "4af628c3-e624-43d6-869d-108bd6becc93", 00:21:57.939 "assigned_rate_limits": { 00:21:57.939 "rw_ios_per_sec": 0, 00:21:57.939 "rw_mbytes_per_sec": 0, 00:21:57.939 "r_mbytes_per_sec": 0, 00:21:57.939 "w_mbytes_per_sec": 0 00:21:57.939 }, 00:21:57.939 "claimed": false, 00:21:57.939 "zoned": false, 00:21:57.939 "supported_io_types": { 00:21:57.939 "read": true, 00:21:57.939 "write": true, 00:21:57.939 "unmap": false, 00:21:57.939 "write_zeroes": true, 00:21:57.939 "flush": true, 00:21:57.939 "reset": true, 00:21:57.939 "compare": true, 00:21:57.940 "compare_and_write": true, 00:21:57.940 "abort": true, 00:21:57.940 "nvme_admin": true, 00:21:57.940 "nvme_io": true 00:21:57.940 }, 00:21:57.940 "memory_domains": [ 00:21:57.940 { 00:21:57.940 "dma_device_id": "system", 00:21:57.940 "dma_device_type": 1 00:21:57.940 } 00:21:57.940 ], 00:21:57.940 "driver_specific": { 00:21:57.940 "nvme": [ 00:21:57.940 { 00:21:57.940 "trid": { 00:21:57.940 "trtype": "TCP", 00:21:57.940 "adrfam": "IPv4", 00:21:57.940 "traddr": "10.0.0.2", 00:21:57.940 "trsvcid": "4420", 00:21:57.940 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:57.940 }, 00:21:57.940 "ctrlr_data": { 00:21:57.940 "cntlid": 1, 00:21:57.940 "vendor_id": "0x8086", 00:21:57.940 "model_number": "SPDK bdev Controller", 00:21:57.940 "serial_number": "00000000000000000000", 00:21:57.940 "firmware_revision": "24.05", 00:21:57.940 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:57.940 "oacs": { 00:21:57.940 "security": 0, 00:21:57.940 "format": 0, 00:21:57.940 "firmware": 0, 00:21:57.940 "ns_manage": 0 00:21:57.940 }, 00:21:57.940 "multi_ctrlr": true, 00:21:57.940 "ana_reporting": false 00:21:57.940 }, 00:21:57.940 "vs": { 00:21:57.940 "nvme_version": "1.3" 00:21:57.940 }, 00:21:57.940 "ns_data": { 00:21:57.940 "id": 1, 00:21:57.940 "can_share": true 00:21:57.940 } 00:21:57.940 } 00:21:57.940 ], 00:21:57.940 "mp_policy": "active_passive" 00:21:57.940 } 00:21:57.940 } 00:21:57.940 ] 00:21:57.940 21:27:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:57.940 21:27:12 -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:21:57.940 21:27:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:57.940 21:27:12 -- common/autotest_common.sh@10 -- # set +x 00:21:57.940 [2024-04-24 21:27:12.724732] 
nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:57.940 [2024-04-24 21:27:12.724817] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000006840 (9): Bad file descriptor 00:21:57.940 [2024-04-24 21:27:12.856394] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:21:57.940 21:27:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:57.940 21:27:12 -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:57.940 21:27:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:57.940 21:27:12 -- common/autotest_common.sh@10 -- # set +x 00:21:57.940 [ 00:21:57.940 { 00:21:57.940 "name": "nvme0n1", 00:21:57.940 "aliases": [ 00:21:57.940 "4af628c3-e624-43d6-869d-108bd6becc93" 00:21:57.940 ], 00:21:57.940 "product_name": "NVMe disk", 00:21:57.940 "block_size": 512, 00:21:57.940 "num_blocks": 2097152, 00:21:57.940 "uuid": "4af628c3-e624-43d6-869d-108bd6becc93", 00:21:57.940 "assigned_rate_limits": { 00:21:57.940 "rw_ios_per_sec": 0, 00:21:57.940 "rw_mbytes_per_sec": 0, 00:21:57.940 "r_mbytes_per_sec": 0, 00:21:57.940 "w_mbytes_per_sec": 0 00:21:57.940 }, 00:21:57.940 "claimed": false, 00:21:57.940 "zoned": false, 00:21:57.940 "supported_io_types": { 00:21:57.940 "read": true, 00:21:57.940 "write": true, 00:21:57.940 "unmap": false, 00:21:57.940 "write_zeroes": true, 00:21:57.940 "flush": true, 00:21:57.940 "reset": true, 00:21:57.940 "compare": true, 00:21:57.940 "compare_and_write": true, 00:21:57.940 "abort": true, 00:21:57.940 "nvme_admin": true, 00:21:57.940 "nvme_io": true 00:21:57.940 }, 00:21:57.940 "memory_domains": [ 00:21:57.940 { 00:21:57.940 "dma_device_id": "system", 00:21:57.940 "dma_device_type": 1 00:21:57.940 } 00:21:57.940 ], 00:21:57.940 "driver_specific": { 00:21:57.940 "nvme": [ 00:21:57.940 { 00:21:57.940 "trid": { 00:21:57.940 "trtype": "TCP", 00:21:57.940 "adrfam": "IPv4", 00:21:57.940 "traddr": "10.0.0.2", 00:21:57.940 "trsvcid": "4420", 00:21:57.940 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:57.940 }, 00:21:57.940 "ctrlr_data": { 00:21:57.940 "cntlid": 2, 00:21:57.940 "vendor_id": "0x8086", 00:21:57.940 "model_number": "SPDK bdev Controller", 00:21:57.940 "serial_number": "00000000000000000000", 00:21:57.940 "firmware_revision": "24.05", 00:21:57.940 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:57.940 "oacs": { 00:21:57.940 "security": 0, 00:21:57.940 "format": 0, 00:21:57.940 "firmware": 0, 00:21:57.940 "ns_manage": 0 00:21:57.940 }, 00:21:57.940 "multi_ctrlr": true, 00:21:57.940 "ana_reporting": false 00:21:57.940 }, 00:21:57.940 "vs": { 00:21:57.940 "nvme_version": "1.3" 00:21:57.940 }, 00:21:57.940 "ns_data": { 00:21:57.940 "id": 1, 00:21:57.940 "can_share": true 00:21:57.940 } 00:21:57.940 } 00:21:57.940 ], 00:21:57.940 "mp_policy": "active_passive" 00:21:57.940 } 00:21:57.940 } 00:21:57.940 ] 00:21:57.940 21:27:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:57.940 21:27:12 -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:57.940 21:27:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:57.940 21:27:12 -- common/autotest_common.sh@10 -- # set +x 00:21:57.940 21:27:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:57.940 21:27:12 -- host/async_init.sh@53 -- # mktemp 00:21:57.940 21:27:12 -- host/async_init.sh@53 -- # key_path=/tmp/tmp.B6uFbTVAXX 00:21:57.940 21:27:12 -- host/async_init.sh@54 -- # echo -n 
NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:57.940 21:27:12 -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.B6uFbTVAXX 00:21:57.940 21:27:12 -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:21:57.940 21:27:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:57.940 21:27:12 -- common/autotest_common.sh@10 -- # set +x 00:21:58.201 21:27:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:58.201 21:27:12 -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:21:58.201 21:27:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:58.201 21:27:12 -- common/autotest_common.sh@10 -- # set +x 00:21:58.201 [2024-04-24 21:27:12.908897] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:58.201 [2024-04-24 21:27:12.909047] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:58.201 21:27:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:58.201 21:27:12 -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.B6uFbTVAXX 00:21:58.201 21:27:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:58.201 21:27:12 -- common/autotest_common.sh@10 -- # set +x 00:21:58.201 [2024-04-24 21:27:12.916912] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:58.201 21:27:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:58.201 21:27:12 -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.B6uFbTVAXX 00:21:58.201 21:27:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:58.201 21:27:12 -- common/autotest_common.sh@10 -- # set +x 00:21:58.201 [2024-04-24 21:27:12.924897] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:58.201 [2024-04-24 21:27:12.924976] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:58.201 nvme0n1 00:21:58.201 21:27:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:58.201 21:27:12 -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:58.201 21:27:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:58.201 21:27:12 -- common/autotest_common.sh@10 -- # set +x 00:21:58.201 [ 00:21:58.201 { 00:21:58.201 "name": "nvme0n1", 00:21:58.201 "aliases": [ 00:21:58.201 "4af628c3-e624-43d6-869d-108bd6becc93" 00:21:58.201 ], 00:21:58.201 "product_name": "NVMe disk", 00:21:58.201 "block_size": 512, 00:21:58.201 "num_blocks": 2097152, 00:21:58.201 "uuid": "4af628c3-e624-43d6-869d-108bd6becc93", 00:21:58.201 "assigned_rate_limits": { 00:21:58.201 "rw_ios_per_sec": 0, 00:21:58.201 "rw_mbytes_per_sec": 0, 00:21:58.201 "r_mbytes_per_sec": 0, 00:21:58.201 "w_mbytes_per_sec": 0 00:21:58.201 }, 00:21:58.201 "claimed": false, 00:21:58.201 "zoned": false, 00:21:58.201 "supported_io_types": { 00:21:58.201 "read": true, 00:21:58.201 "write": true, 00:21:58.201 "unmap": false, 00:21:58.201 "write_zeroes": true, 00:21:58.201 "flush": true, 00:21:58.201 "reset": true, 00:21:58.201 "compare": true, 00:21:58.201 "compare_and_write": true, 00:21:58.201 
"abort": true, 00:21:58.201 "nvme_admin": true, 00:21:58.201 "nvme_io": true 00:21:58.201 }, 00:21:58.201 "memory_domains": [ 00:21:58.201 { 00:21:58.201 "dma_device_id": "system", 00:21:58.201 "dma_device_type": 1 00:21:58.201 } 00:21:58.201 ], 00:21:58.201 "driver_specific": { 00:21:58.201 "nvme": [ 00:21:58.201 { 00:21:58.201 "trid": { 00:21:58.201 "trtype": "TCP", 00:21:58.201 "adrfam": "IPv4", 00:21:58.201 "traddr": "10.0.0.2", 00:21:58.201 "trsvcid": "4421", 00:21:58.201 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:58.201 }, 00:21:58.201 "ctrlr_data": { 00:21:58.201 "cntlid": 3, 00:21:58.201 "vendor_id": "0x8086", 00:21:58.201 "model_number": "SPDK bdev Controller", 00:21:58.201 "serial_number": "00000000000000000000", 00:21:58.201 "firmware_revision": "24.05", 00:21:58.201 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:58.201 "oacs": { 00:21:58.201 "security": 0, 00:21:58.201 "format": 0, 00:21:58.201 "firmware": 0, 00:21:58.201 "ns_manage": 0 00:21:58.201 }, 00:21:58.201 "multi_ctrlr": true, 00:21:58.201 "ana_reporting": false 00:21:58.201 }, 00:21:58.201 "vs": { 00:21:58.201 "nvme_version": "1.3" 00:21:58.201 }, 00:21:58.201 "ns_data": { 00:21:58.201 "id": 1, 00:21:58.201 "can_share": true 00:21:58.201 } 00:21:58.201 } 00:21:58.201 ], 00:21:58.201 "mp_policy": "active_passive" 00:21:58.201 } 00:21:58.201 } 00:21:58.201 ] 00:21:58.201 21:27:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:58.201 21:27:13 -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:58.201 21:27:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:58.201 21:27:13 -- common/autotest_common.sh@10 -- # set +x 00:21:58.201 21:27:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:58.201 21:27:13 -- host/async_init.sh@75 -- # rm -f /tmp/tmp.B6uFbTVAXX 00:21:58.201 21:27:13 -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:21:58.201 21:27:13 -- host/async_init.sh@78 -- # nvmftestfini 00:21:58.201 21:27:13 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:58.201 21:27:13 -- nvmf/common.sh@117 -- # sync 00:21:58.201 21:27:13 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:58.201 21:27:13 -- nvmf/common.sh@120 -- # set +e 00:21:58.201 21:27:13 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:58.201 21:27:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:58.201 rmmod nvme_tcp 00:21:58.201 rmmod nvme_fabrics 00:21:58.201 rmmod nvme_keyring 00:21:58.201 21:27:13 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:58.201 21:27:13 -- nvmf/common.sh@124 -- # set -e 00:21:58.201 21:27:13 -- nvmf/common.sh@125 -- # return 0 00:21:58.201 21:27:13 -- nvmf/common.sh@478 -- # '[' -n 1283200 ']' 00:21:58.201 21:27:13 -- nvmf/common.sh@479 -- # killprocess 1283200 00:21:58.201 21:27:13 -- common/autotest_common.sh@936 -- # '[' -z 1283200 ']' 00:21:58.201 21:27:13 -- common/autotest_common.sh@940 -- # kill -0 1283200 00:21:58.201 21:27:13 -- common/autotest_common.sh@941 -- # uname 00:21:58.201 21:27:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:58.201 21:27:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1283200 00:21:58.201 21:27:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:58.201 21:27:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:58.201 21:27:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1283200' 00:21:58.201 killing process with pid 1283200 00:21:58.201 21:27:13 -- common/autotest_common.sh@955 -- # kill 1283200 00:21:58.201 
[2024-04-24 21:27:13.158978] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:58.201 [2024-04-24 21:27:13.159016] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:58.201 21:27:13 -- common/autotest_common.sh@960 -- # wait 1283200 00:21:58.773 21:27:13 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:58.773 21:27:13 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:21:58.773 21:27:13 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:21:58.773 21:27:13 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:58.773 21:27:13 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:58.773 21:27:13 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:58.773 21:27:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:58.773 21:27:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:01.316 21:27:15 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:01.316 00:22:01.316 real 0m9.425s 00:22:01.316 user 0m3.567s 00:22:01.316 sys 0m4.258s 00:22:01.316 21:27:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:01.316 21:27:15 -- common/autotest_common.sh@10 -- # set +x 00:22:01.316 ************************************ 00:22:01.316 END TEST nvmf_async_init 00:22:01.316 ************************************ 00:22:01.316 21:27:15 -- nvmf/nvmf.sh@92 -- # run_test dma /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:22:01.316 21:27:15 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:01.316 21:27:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:01.316 21:27:15 -- common/autotest_common.sh@10 -- # set +x 00:22:01.316 ************************************ 00:22:01.316 START TEST dma 00:22:01.316 ************************************ 00:22:01.317 21:27:15 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:22:01.317 * Looking for test storage... 
00:22:01.317 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host 00:22:01.317 21:27:15 -- host/dma.sh@10 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:22:01.317 21:27:15 -- nvmf/common.sh@7 -- # uname -s 00:22:01.317 21:27:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:01.317 21:27:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:01.317 21:27:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:01.317 21:27:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:01.317 21:27:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:01.317 21:27:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:01.317 21:27:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:01.317 21:27:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:01.317 21:27:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:01.317 21:27:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:01.317 21:27:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b7babf-2e5c-ee11-906e-a4bf01970bf2 00:22:01.317 21:27:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=80b7babf-2e5c-ee11-906e-a4bf01970bf2 00:22:01.317 21:27:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:01.317 21:27:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:01.317 21:27:15 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:22:01.317 21:27:15 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:01.317 21:27:15 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:22:01.317 21:27:15 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:01.317 21:27:15 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:01.317 21:27:15 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:01.317 21:27:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:01.317 21:27:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:01.317 21:27:15 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:01.317 21:27:15 -- paths/export.sh@5 -- # export PATH 00:22:01.317 21:27:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:01.317 21:27:15 -- nvmf/common.sh@47 -- # : 0 00:22:01.317 21:27:15 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:01.317 21:27:15 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:01.317 21:27:15 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:01.317 21:27:15 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:01.317 21:27:15 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:01.317 21:27:15 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:01.317 21:27:15 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:01.317 21:27:15 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:01.317 21:27:15 -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:22:01.317 21:27:15 -- host/dma.sh@13 -- # exit 0 00:22:01.317 00:22:01.317 real 0m0.083s 00:22:01.317 user 0m0.038s 00:22:01.317 sys 0m0.051s 00:22:01.317 21:27:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:01.317 21:27:15 -- common/autotest_common.sh@10 -- # set +x 00:22:01.317 ************************************ 00:22:01.317 END TEST dma 00:22:01.317 ************************************ 00:22:01.317 21:27:15 -- nvmf/nvmf.sh@95 -- # run_test nvmf_identify /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:22:01.317 21:27:15 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:01.317 21:27:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:01.317 21:27:15 -- common/autotest_common.sh@10 -- # set +x 00:22:01.317 ************************************ 00:22:01.317 START TEST nvmf_identify 00:22:01.317 ************************************ 00:22:01.317 21:27:16 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:22:01.317 * Looking for test storage... 
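Just above, TEST dma completed in 83 ms without testing anything: host/dma.sh is RDMA-only, so under --transport=tcp the guard at its top exits 0 before doing any work and run_test records that as a pass. Reconstructed from the two traced entries at dma.sh@12 and @13 ("tcp" is the already-substituted transport value, not a literal in the script):

  if [ tcp != rdma ]; then   # dma.sh@12: transport is tcp, not rdma
      exit 0                 # dma.sh@13: nothing to exercise over TCP
  fi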
00:22:01.317 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host 00:22:01.317 21:27:16 -- host/identify.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:22:01.317 21:27:16 -- nvmf/common.sh@7 -- # uname -s 00:22:01.317 21:27:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:01.317 21:27:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:01.317 21:27:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:01.317 21:27:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:01.317 21:27:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:01.317 21:27:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:01.317 21:27:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:01.317 21:27:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:01.317 21:27:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:01.317 21:27:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:01.317 21:27:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b7babf-2e5c-ee11-906e-a4bf01970bf2 00:22:01.317 21:27:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=80b7babf-2e5c-ee11-906e-a4bf01970bf2 00:22:01.317 21:27:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:01.317 21:27:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:01.317 21:27:16 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:22:01.317 21:27:16 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:01.317 21:27:16 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:22:01.317 21:27:16 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:01.317 21:27:16 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:01.317 21:27:16 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:01.317 21:27:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:01.317 21:27:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:01.317 21:27:16 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:01.317 21:27:16 -- paths/export.sh@5 -- # export PATH 00:22:01.317 21:27:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:01.317 21:27:16 -- nvmf/common.sh@47 -- # : 0 00:22:01.317 21:27:16 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:01.317 21:27:16 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:01.317 21:27:16 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:01.317 21:27:16 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:01.317 21:27:16 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:01.317 21:27:16 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:01.317 21:27:16 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:01.317 21:27:16 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:01.317 21:27:16 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:01.317 21:27:16 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:01.317 21:27:16 -- host/identify.sh@14 -- # nvmftestinit 00:22:01.317 21:27:16 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:22:01.317 21:27:16 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:01.317 21:27:16 -- nvmf/common.sh@437 -- # prepare_net_devs 00:22:01.317 21:27:16 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:22:01.317 21:27:16 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:22:01.317 21:27:16 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:01.317 21:27:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:01.317 21:27:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:01.317 21:27:16 -- nvmf/common.sh@403 -- # [[ phy-fallback != virt ]] 00:22:01.317 21:27:16 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:22:01.317 21:27:16 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:01.317 21:27:16 -- common/autotest_common.sh@10 -- # set +x 00:22:06.594 21:27:21 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:06.594 21:27:21 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:06.594 21:27:21 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:06.594 21:27:21 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:06.594 21:27:21 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:06.594 21:27:21 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:06.594 21:27:21 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:06.594 21:27:21 -- nvmf/common.sh@295 -- # net_devs=() 00:22:06.594 21:27:21 -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:06.594 21:27:21 -- 
nvmf/common.sh@296 -- # e810=() 00:22:06.594 21:27:21 -- nvmf/common.sh@296 -- # local -ga e810 00:22:06.594 21:27:21 -- nvmf/common.sh@297 -- # x722=() 00:22:06.594 21:27:21 -- nvmf/common.sh@297 -- # local -ga x722 00:22:06.594 21:27:21 -- nvmf/common.sh@298 -- # mlx=() 00:22:06.594 21:27:21 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:06.594 21:27:21 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:06.594 21:27:21 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:06.594 21:27:21 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:06.594 21:27:21 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:06.594 21:27:21 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:06.594 21:27:21 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:06.594 21:27:21 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:06.594 21:27:21 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:06.594 21:27:21 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:06.594 21:27:21 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:06.594 21:27:21 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:06.594 21:27:21 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:06.594 21:27:21 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:06.594 21:27:21 -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:22:06.594 21:27:21 -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:22:06.594 21:27:21 -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:22:06.594 21:27:21 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:06.594 21:27:21 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:06.594 21:27:21 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:22:06.594 Found 0000:27:00.0 (0x8086 - 0x159b) 00:22:06.594 21:27:21 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:06.594 21:27:21 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:06.594 21:27:21 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:06.594 21:27:21 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:06.594 21:27:21 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:06.594 21:27:21 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:06.594 21:27:21 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:22:06.594 Found 0000:27:00.1 (0x8086 - 0x159b) 00:22:06.594 21:27:21 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:06.594 21:27:21 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:06.594 21:27:21 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:06.594 21:27:21 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:06.594 21:27:21 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:06.594 21:27:21 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:06.594 21:27:21 -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:22:06.594 21:27:21 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:06.594 21:27:21 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:06.594 21:27:21 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:06.594 21:27:21 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:06.594 21:27:21 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:22:06.594 Found net devices under 0000:27:00.0: cvl_0_0 00:22:06.594 
21:27:21 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:06.594 21:27:21 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:06.594 21:27:21 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:06.594 21:27:21 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:06.594 21:27:21 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:06.594 21:27:21 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:22:06.594 Found net devices under 0000:27:00.1: cvl_0_1 00:22:06.594 21:27:21 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:06.594 21:27:21 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:22:06.594 21:27:21 -- nvmf/common.sh@403 -- # is_hw=yes 00:22:06.594 21:27:21 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:22:06.594 21:27:21 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:22:06.594 21:27:21 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:22:06.594 21:27:21 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:06.594 21:27:21 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:06.594 21:27:21 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:06.594 21:27:21 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:06.594 21:27:21 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:06.594 21:27:21 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:06.594 21:27:21 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:06.594 21:27:21 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:06.594 21:27:21 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:06.594 21:27:21 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:06.594 21:27:21 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:06.594 21:27:21 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:06.594 21:27:21 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:06.594 21:27:21 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:06.594 21:27:21 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:06.594 21:27:21 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:06.594 21:27:21 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:06.853 21:27:21 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:06.853 21:27:21 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:06.853 21:27:21 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:06.853 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:06.853 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.447 ms 00:22:06.853 00:22:06.853 --- 10.0.0.2 ping statistics --- 00:22:06.853 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:06.853 rtt min/avg/max/mdev = 0.447/0.447/0.447/0.000 ms 00:22:06.853 21:27:21 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:06.853 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:06.853 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.388 ms 00:22:06.853 00:22:06.853 --- 10.0.0.1 ping statistics --- 00:22:06.853 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:06.853 rtt min/avg/max/mdev = 0.388/0.388/0.388/0.000 ms 00:22:06.853 21:27:21 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:06.853 21:27:21 -- nvmf/common.sh@411 -- # return 0 00:22:06.853 21:27:21 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:22:06.853 21:27:21 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:06.853 21:27:21 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:22:06.853 21:27:21 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:22:06.853 21:27:21 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:06.853 21:27:21 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:22:06.853 21:27:21 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:22:06.853 21:27:21 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:22:06.853 21:27:21 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:06.853 21:27:21 -- common/autotest_common.sh@10 -- # set +x 00:22:06.853 21:27:21 -- host/identify.sh@19 -- # nvmfpid=1287485 00:22:06.853 21:27:21 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:06.853 21:27:21 -- host/identify.sh@23 -- # waitforlisten 1287485 00:22:06.853 21:27:21 -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:06.853 21:27:21 -- common/autotest_common.sh@817 -- # '[' -z 1287485 ']' 00:22:06.853 21:27:21 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:06.853 21:27:21 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:06.853 21:27:21 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:06.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:06.853 21:27:21 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:06.853 21:27:21 -- common/autotest_common.sh@10 -- # set +x 00:22:06.853 [2024-04-24 21:27:21.725884] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization... 00:22:06.853 [2024-04-24 21:27:21.725983] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:06.853 EAL: No free 2048 kB hugepages reported on node 1 00:22:07.112 [2024-04-24 21:27:21.843051] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:07.112 [2024-04-24 21:27:21.937870] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:07.112 [2024-04-24 21:27:21.937904] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:07.112 [2024-04-24 21:27:21.937915] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:07.112 [2024-04-24 21:27:21.937925] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:07.112 [2024-04-24 21:27:21.937932] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
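Before the nvmf_tgt launch above, the nvmf_tcp_init sequence wired the two ice ports into a loopback topology on one host: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2 (target side), while cvl_0_1 stays in the root namespace as 10.0.0.1 (initiator side). Stripped of the xtrace prefixes, the setup amounts to roughly the following sketch (interface, namespace, and address names taken from this run; error handling omitted):

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port disappears into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator port, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port in the host firewall
  ping -c 1 10.0.0.2                                   # root namespace -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # namespace -> initiator
  modprobe nvme-tcp

The two single-packet pings with 0% loss above are this reachability check passing, after which common.sh sets the TCP transport options and loads the nvme-tcp kernel module.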
00:22:07.112 [2024-04-24 21:27:21.938000] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:07.112 [2024-04-24 21:27:21.938095] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:07.112 [2024-04-24 21:27:21.938196] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:07.112 [2024-04-24 21:27:21.938207] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:07.682 21:27:22 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:07.682 21:27:22 -- common/autotest_common.sh@850 -- # return 0 00:22:07.682 21:27:22 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:07.682 21:27:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:07.682 21:27:22 -- common/autotest_common.sh@10 -- # set +x 00:22:07.682 [2024-04-24 21:27:22.431602] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:07.682 21:27:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:07.682 21:27:22 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:22:07.682 21:27:22 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:07.682 21:27:22 -- common/autotest_common.sh@10 -- # set +x 00:22:07.682 21:27:22 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:07.682 21:27:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:07.682 21:27:22 -- common/autotest_common.sh@10 -- # set +x 00:22:07.682 Malloc0 00:22:07.682 21:27:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:07.682 21:27:22 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:07.682 21:27:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:07.682 21:27:22 -- common/autotest_common.sh@10 -- # set +x 00:22:07.682 21:27:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:07.682 21:27:22 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:22:07.682 21:27:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:07.682 21:27:22 -- common/autotest_common.sh@10 -- # set +x 00:22:07.682 21:27:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:07.682 21:27:22 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:07.682 21:27:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:07.682 21:27:22 -- common/autotest_common.sh@10 -- # set +x 00:22:07.682 [2024-04-24 21:27:22.540377] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:07.682 21:27:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:07.682 21:27:22 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:07.683 21:27:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:07.683 21:27:22 -- common/autotest_common.sh@10 -- # set +x 00:22:07.683 21:27:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:07.683 21:27:22 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:22:07.683 21:27:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:07.683 21:27:22 -- common/autotest_common.sh@10 -- # set +x 00:22:07.683 [2024-04-24 21:27:22.556103] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:22:07.683 [ 
00:22:07.683 {
00:22:07.683 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:22:07.683 "subtype": "Discovery",
00:22:07.683 "listen_addresses": [
00:22:07.683 {
00:22:07.683 "transport": "TCP",
00:22:07.683 "trtype": "TCP",
00:22:07.683 "adrfam": "IPv4",
00:22:07.683 "traddr": "10.0.0.2",
00:22:07.683 "trsvcid": "4420"
00:22:07.683 }
00:22:07.683 ],
00:22:07.683 "allow_any_host": true,
00:22:07.683 "hosts": []
00:22:07.683 },
00:22:07.683 {
00:22:07.683 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:22:07.683 "subtype": "NVMe",
00:22:07.683 "listen_addresses": [
00:22:07.683 {
00:22:07.683 "transport": "TCP",
00:22:07.683 "trtype": "TCP",
00:22:07.683 "adrfam": "IPv4",
00:22:07.683 "traddr": "10.0.0.2",
00:22:07.683 "trsvcid": "4420"
00:22:07.683 }
00:22:07.683 ],
00:22:07.683 "allow_any_host": true,
00:22:07.683 "hosts": [],
00:22:07.683 "serial_number": "SPDK00000000000001",
00:22:07.683 "model_number": "SPDK bdev Controller",
00:22:07.683 "max_namespaces": 32,
00:22:07.683 "min_cntlid": 1,
00:22:07.683 "max_cntlid": 65519,
00:22:07.683 "namespaces": [
00:22:07.683 {
00:22:07.683 "nsid": 1,
00:22:07.683 "bdev_name": "Malloc0",
00:22:07.683 "name": "Malloc0",
00:22:07.683 "nguid": "ABCDEF0123456789ABCDEF0123456789",
00:22:07.683 "eui64": "ABCDEF0123456789",
00:22:07.683 "uuid": "1d2d653f-3ce2-460c-ab11-8a9c030187e0"
00:22:07.683 }
00:22:07.683 ]
00:22:07.683 }
00:22:07.683 ]
00:22:07.683 21:27:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:07.683 21:27:22 -- host/identify.sh@39 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all
[2024-04-24 21:27:22.609487] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization...
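Everything spdk_nvme_identify is about to report was configured just above through rpc_cmd, the suite's wrapper around SPDK's scripts/rpc.py JSON-RPC client. Replayed by hand against a running nvmf_tgt, the same setup would look roughly like this sketch; values are copied from this run, and rpc.py is assumed to reach the target's RPC socket (the wrapper handles that for the -i 0 instance started here):

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192        # TCP transport, flags as in host/identify.sh@24
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0           # 64 MB RAM-backed bdev, 512-byte blocks
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
      --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_get_subsystems                            # prints the JSON dump shown above

The identify run itself then connects from the root namespace to the discovery NQN, as in host/identify.sh@39:

  build/bin/spdk_nvme_identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all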
00:22:07.683 [2024-04-24 21:27:22.609585] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1287796 ] 00:22:07.683 EAL: No free 2048 kB hugepages reported on node 1 00:22:07.947 [2024-04-24 21:27:22.668062] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:22:07.947 [2024-04-24 21:27:22.668157] nvme_tcp.c:2326:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:07.947 [2024-04-24 21:27:22.668168] nvme_tcp.c:2330:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:07.947 [2024-04-24 21:27:22.668190] nvme_tcp.c:2348:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:07.947 [2024-04-24 21:27:22.668205] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:07.947 [2024-04-24 21:27:22.668586] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:22:07.947 [2024-04-24 21:27:22.668634] nvme_tcp.c:1543:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x614000002040 0 00:22:07.947 [2024-04-24 21:27:22.683281] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:07.947 [2024-04-24 21:27:22.683302] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:07.947 [2024-04-24 21:27:22.683310] nvme_tcp.c:1589:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:07.947 [2024-04-24 21:27:22.683316] nvme_tcp.c:1590:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:07.947 [2024-04-24 21:27:22.683373] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:07.947 [2024-04-24 21:27:22.683383] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:07.947 [2024-04-24 21:27:22.683391] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:22:07.947 [2024-04-24 21:27:22.683422] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:07.947 [2024-04-24 21:27:22.683446] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:07.947 [2024-04-24 21:27:22.691287] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:07.947 [2024-04-24 21:27:22.691303] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:07.947 [2024-04-24 21:27:22.691308] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:07.947 [2024-04-24 21:27:22.691315] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:22:07.947 [2024-04-24 21:27:22.691333] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:07.947 [2024-04-24 21:27:22.691347] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:22:07.947 [2024-04-24 21:27:22.691355] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:22:07.947 [2024-04-24 21:27:22.691375] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:07.947 [2024-04-24 21:27:22.691381] nvme_tcp.c: 
949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:07.947 [2024-04-24 21:27:22.691391] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:22:07.947 [2024-04-24 21:27:22.691410] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.948 [2024-04-24 21:27:22.691428] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:07.948 [2024-04-24 21:27:22.691587] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:07.948 [2024-04-24 21:27:22.691595] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:07.948 [2024-04-24 21:27:22.691606] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:07.948 [2024-04-24 21:27:22.691611] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:22:07.948 [2024-04-24 21:27:22.691621] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:22:07.948 [2024-04-24 21:27:22.691629] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:22:07.948 [2024-04-24 21:27:22.691638] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:07.948 [2024-04-24 21:27:22.691646] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:07.948 [2024-04-24 21:27:22.691652] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:22:07.948 [2024-04-24 21:27:22.691662] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.948 [2024-04-24 21:27:22.691675] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:07.948 [2024-04-24 21:27:22.691804] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:07.948 [2024-04-24 21:27:22.691812] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:07.948 [2024-04-24 21:27:22.691816] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:07.948 [2024-04-24 21:27:22.691823] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:22:07.948 [2024-04-24 21:27:22.691831] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:22:07.948 [2024-04-24 21:27:22.691841] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:22:07.948 [2024-04-24 21:27:22.691848] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:07.948 [2024-04-24 21:27:22.691854] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:07.948 [2024-04-24 21:27:22.691859] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:22:07.948 [2024-04-24 21:27:22.691870] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.948 [2024-04-24 21:27:22.691880] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:07.948 [2024-04-24 21:27:22.692003] 
nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:07.948 [2024-04-24 21:27:22.692009] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:07.948 [2024-04-24 21:27:22.692013] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:07.948 [2024-04-24 21:27:22.692018] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:22:07.948 [2024-04-24 21:27:22.692024] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:07.948 [2024-04-24 21:27:22.692034] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:07.948 [2024-04-24 21:27:22.692039] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:07.948 [2024-04-24 21:27:22.692045] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:22:07.948 [2024-04-24 21:27:22.692054] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.948 [2024-04-24 21:27:22.692069] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:07.948 [2024-04-24 21:27:22.692198] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:07.948 [2024-04-24 21:27:22.692204] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:07.948 [2024-04-24 21:27:22.692208] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:07.948 [2024-04-24 21:27:22.692214] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:22:07.948 [2024-04-24 21:27:22.692221] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:22:07.948 [2024-04-24 21:27:22.692227] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:22:07.948 [2024-04-24 21:27:22.692235] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:07.948 [2024-04-24 21:27:22.692344] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:22:07.948 [2024-04-24 21:27:22.692354] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:22:07.948 [2024-04-24 21:27:22.692367] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:07.948 [2024-04-24 21:27:22.692372] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:07.948 [2024-04-24 21:27:22.692377] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:22:07.948 [2024-04-24 21:27:22.692388] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.948 [2024-04-24 21:27:22.692399] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:07.948 [2024-04-24 21:27:22.692534] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:07.948 [2024-04-24 21:27:22.692540] 
nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:07.948 [2024-04-24 21:27:22.692544] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:07.948 [2024-04-24 21:27:22.692548] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:22:07.948 [2024-04-24 21:27:22.692554] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:07.948 [2024-04-24 21:27:22.692566] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:07.948 [2024-04-24 21:27:22.692574] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:07.948 [2024-04-24 21:27:22.692579] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:22:07.948 [2024-04-24 21:27:22.692588] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.948 [2024-04-24 21:27:22.692598] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:07.948 [2024-04-24 21:27:22.692726] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:07.948 [2024-04-24 21:27:22.692733] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:07.948 [2024-04-24 21:27:22.692737] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:07.948 [2024-04-24 21:27:22.692741] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:22:07.948 [2024-04-24 21:27:22.692747] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:07.948 [2024-04-24 21:27:22.692753] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:22:07.948 [2024-04-24 21:27:22.692762] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:22:07.948 [2024-04-24 21:27:22.692775] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:22:07.948 [2024-04-24 21:27:22.692789] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:07.948 [2024-04-24 21:27:22.692794] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:22:07.948 [2024-04-24 21:27:22.692805] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.948 [2024-04-24 21:27:22.692821] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:07.948 [2024-04-24 21:27:22.692989] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:07.948 [2024-04-24 21:27:22.692995] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:07.948 [2024-04-24 21:27:22.693000] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:07.948 [2024-04-24 21:27:22.693005] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x614000002040): datao=0, datal=4096, cccid=0 00:22:07.948 [2024-04-24 21:27:22.693012] 
nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b100) on tqpair(0x614000002040): expected_datao=0, payload_size=4096 00:22:07.948 [2024-04-24 21:27:22.693019] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:07.948 [2024-04-24 21:27:22.693029] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:07.948 [2024-04-24 21:27:22.693036] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:07.948 [2024-04-24 21:27:22.693106] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:07.948 [2024-04-24 21:27:22.693112] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:07.948 [2024-04-24 21:27:22.693116] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:07.948 [2024-04-24 21:27:22.693122] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:22:07.948 [2024-04-24 21:27:22.693135] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:22:07.948 [2024-04-24 21:27:22.693142] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:22:07.948 [2024-04-24 21:27:22.693148] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:22:07.948 [2024-04-24 21:27:22.693159] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:22:07.948 [2024-04-24 21:27:22.693166] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:22:07.948 [2024-04-24 21:27:22.693172] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:22:07.949 [2024-04-24 21:27:22.693181] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:22:07.949 [2024-04-24 21:27:22.693191] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:07.949 [2024-04-24 21:27:22.693196] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:07.949 [2024-04-24 21:27:22.693201] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:22:07.949 [2024-04-24 21:27:22.693211] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:07.949 [2024-04-24 21:27:22.693221] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:07.949 [2024-04-24 21:27:22.693363] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:07.949 [2024-04-24 21:27:22.693369] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:07.949 [2024-04-24 21:27:22.693373] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:07.949 [2024-04-24 21:27:22.693379] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:22:07.949 [2024-04-24 21:27:22.693390] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:07.949 [2024-04-24 21:27:22.693395] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:07.949 [2024-04-24 21:27:22.693402] nvme_tcp.c: 
958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:22:07.949 [2024-04-24 21:27:22.693412] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:07.949 [2024-04-24 21:27:22.693419] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:07.949 [2024-04-24 21:27:22.693423] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:07.949 [2024-04-24 21:27:22.693428] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x614000002040) 00:22:07.949 [2024-04-24 21:27:22.693435] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:07.949 [2024-04-24 21:27:22.693441] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:07.949 [2024-04-24 21:27:22.693446] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:07.949 [2024-04-24 21:27:22.693450] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x614000002040) 00:22:07.949 [2024-04-24 21:27:22.693460] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:07.949 [2024-04-24 21:27:22.693466] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:07.949 [2024-04-24 21:27:22.693470] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:07.949 [2024-04-24 21:27:22.693476] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:22:07.949 [2024-04-24 21:27:22.693483] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:07.949 [2024-04-24 21:27:22.693488] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:22:07.949 [2024-04-24 21:27:22.693498] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:07.949 [2024-04-24 21:27:22.693505] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:07.949 [2024-04-24 21:27:22.693510] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x614000002040) 00:22:07.949 [2024-04-24 21:27:22.693519] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.949 [2024-04-24 21:27:22.693532] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:07.949 [2024-04-24 21:27:22.693537] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b260, cid 1, qid 0 00:22:07.949 [2024-04-24 21:27:22.693542] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b3c0, cid 2, qid 0 00:22:07.949 [2024-04-24 21:27:22.693549] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:22:07.949 [2024-04-24 21:27:22.693554] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:22:07.949 [2024-04-24 21:27:22.693713] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:07.949 [2024-04-24 21:27:22.693720] 
nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:07.949 [2024-04-24 21:27:22.693724] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:07.949 [2024-04-24 21:27:22.693728] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x614000002040 00:22:07.949 [2024-04-24 21:27:22.693736] nvme_ctrlr.c:2902:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:22:07.949 [2024-04-24 21:27:22.693743] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:22:07.949 [2024-04-24 21:27:22.693757] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:07.949 [2024-04-24 21:27:22.693763] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x614000002040) 00:22:07.949 [2024-04-24 21:27:22.693777] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.949 [2024-04-24 21:27:22.693787] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:22:07.949 [2024-04-24 21:27:22.693925] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:07.949 [2024-04-24 21:27:22.693933] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:07.949 [2024-04-24 21:27:22.693937] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:07.949 [2024-04-24 21:27:22.693942] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x614000002040): datao=0, datal=4096, cccid=4 00:22:07.949 [2024-04-24 21:27:22.693948] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b680) on tqpair(0x614000002040): expected_datao=0, payload_size=4096 00:22:07.949 [2024-04-24 21:27:22.693956] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:07.949 [2024-04-24 21:27:22.693993] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:07.949 [2024-04-24 21:27:22.693998] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:07.949 [2024-04-24 21:27:22.734514] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:07.949 [2024-04-24 21:27:22.734531] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:07.949 [2024-04-24 21:27:22.734539] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:07.949 [2024-04-24 21:27:22.734545] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x614000002040 00:22:07.949 [2024-04-24 21:27:22.734568] nvme_ctrlr.c:4036:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:22:07.949 [2024-04-24 21:27:22.734614] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:07.949 [2024-04-24 21:27:22.734620] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x614000002040) 00:22:07.949 [2024-04-24 21:27:22.734633] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.949 [2024-04-24 21:27:22.734642] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:07.949 [2024-04-24 21:27:22.734650] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
enter 00:22:07.949 [2024-04-24 21:27:22.734655] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x614000002040) 00:22:07.949 [2024-04-24 21:27:22.734664] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:07.949 [2024-04-24 21:27:22.734680] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:22:07.949 [2024-04-24 21:27:22.734688] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b7e0, cid 5, qid 0 00:22:07.949 [2024-04-24 21:27:22.734914] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:07.949 [2024-04-24 21:27:22.734921] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:07.949 [2024-04-24 21:27:22.734926] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:07.949 [2024-04-24 21:27:22.734932] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x614000002040): datao=0, datal=1024, cccid=4 00:22:07.949 [2024-04-24 21:27:22.734938] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b680) on tqpair(0x614000002040): expected_datao=0, payload_size=1024 00:22:07.949 [2024-04-24 21:27:22.734943] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:07.949 [2024-04-24 21:27:22.734952] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:07.949 [2024-04-24 21:27:22.734958] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:07.949 [2024-04-24 21:27:22.734966] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:07.949 [2024-04-24 21:27:22.734973] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:07.949 [2024-04-24 21:27:22.734977] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:07.949 [2024-04-24 21:27:22.734982] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b7e0) on tqpair=0x614000002040 00:22:07.949 [2024-04-24 21:27:22.779276] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:07.949 [2024-04-24 21:27:22.779290] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:07.949 [2024-04-24 21:27:22.779295] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:07.949 [2024-04-24 21:27:22.779300] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x614000002040 00:22:07.949 [2024-04-24 21:27:22.779323] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:07.949 [2024-04-24 21:27:22.779329] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x614000002040) 00:22:07.949 [2024-04-24 21:27:22.779341] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.949 [2024-04-24 21:27:22.779358] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:22:07.949 [2024-04-24 21:27:22.779526] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:07.949 [2024-04-24 21:27:22.779532] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:07.949 [2024-04-24 21:27:22.779536] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:07.949 [2024-04-24 21:27:22.779543] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on 
tqpair(0x614000002040): datao=0, datal=3072, cccid=4 00:22:07.949 [2024-04-24 21:27:22.779549] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b680) on tqpair(0x614000002040): expected_datao=0, payload_size=3072 00:22:07.949 [2024-04-24 21:27:22.779554] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:07.949 [2024-04-24 21:27:22.779562] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:07.949 [2024-04-24 21:27:22.779566] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:07.949 [2024-04-24 21:27:22.779638] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:07.949 [2024-04-24 21:27:22.779644] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:07.949 [2024-04-24 21:27:22.779648] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:07.949 [2024-04-24 21:27:22.779652] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x614000002040 00:22:07.949 [2024-04-24 21:27:22.779663] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:07.950 [2024-04-24 21:27:22.779669] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x614000002040) 00:22:07.950 [2024-04-24 21:27:22.779678] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.950 [2024-04-24 21:27:22.779693] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:22:07.950 [2024-04-24 21:27:22.779834] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:07.950 [2024-04-24 21:27:22.779840] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:07.950 [2024-04-24 21:27:22.779844] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:07.950 [2024-04-24 21:27:22.779848] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x614000002040): datao=0, datal=8, cccid=4 00:22:07.950 [2024-04-24 21:27:22.779854] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b680) on tqpair(0x614000002040): expected_datao=0, payload_size=8 00:22:07.950 [2024-04-24 21:27:22.779858] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:07.950 [2024-04-24 21:27:22.779865] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:07.950 [2024-04-24 21:27:22.779869] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:07.950 [2024-04-24 21:27:22.820527] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:07.950 [2024-04-24 21:27:22.820546] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:07.950 [2024-04-24 21:27:22.820550] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:07.950 [2024-04-24 21:27:22.820555] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x614000002040 00:22:07.950 ===================================================== 00:22:07.950 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:22:07.950 ===================================================== 00:22:07.950 Controller Capabilities/Features 00:22:07.950 ================================ 00:22:07.950 Vendor ID: 0000 00:22:07.950 Subsystem Vendor ID: 0000 00:22:07.950 Serial Number: .................... 
00:22:07.950 Model Number: ........................................
00:22:07.950 Firmware Version: 24.05
00:22:07.950 Recommended Arb Burst: 0
00:22:07.950 IEEE OUI Identifier: 00 00 00
00:22:07.950 Multi-path I/O
00:22:07.950 May have multiple subsystem ports: No
00:22:07.950 May have multiple controllers: No
00:22:07.950 Associated with SR-IOV VF: No
00:22:07.950 Max Data Transfer Size: 131072
00:22:07.950 Max Number of Namespaces: 0
00:22:07.950 Max Number of I/O Queues: 1024
00:22:07.950 NVMe Specification Version (VS): 1.3
00:22:07.950 NVMe Specification Version (Identify): 1.3
00:22:07.950 Maximum Queue Entries: 128
00:22:07.950 Contiguous Queues Required: Yes
00:22:07.950 Arbitration Mechanisms Supported
00:22:07.950 Weighted Round Robin: Not Supported
00:22:07.950 Vendor Specific: Not Supported
00:22:07.950 Reset Timeout: 15000 ms
00:22:07.950 Doorbell Stride: 4 bytes
00:22:07.950 NVM Subsystem Reset: Not Supported
00:22:07.950 Command Sets Supported
00:22:07.950 NVM Command Set: Supported
00:22:07.950 Boot Partition: Not Supported
00:22:07.950 Memory Page Size Minimum: 4096 bytes
00:22:07.950 Memory Page Size Maximum: 4096 bytes
00:22:07.950 Persistent Memory Region: Not Supported
00:22:07.950 Optional Asynchronous Events Supported
00:22:07.950 Namespace Attribute Notices: Not Supported
00:22:07.950 Firmware Activation Notices: Not Supported
00:22:07.950 ANA Change Notices: Not Supported
00:22:07.950 PLE Aggregate Log Change Notices: Not Supported
00:22:07.950 LBA Status Info Alert Notices: Not Supported
00:22:07.950 EGE Aggregate Log Change Notices: Not Supported
00:22:07.950 Normal NVM Subsystem Shutdown event: Not Supported
00:22:07.950 Zone Descriptor Change Notices: Not Supported
00:22:07.950 Discovery Log Change Notices: Supported
00:22:07.950 Controller Attributes
00:22:07.950 128-bit Host Identifier: Not Supported
00:22:07.950 Non-Operational Permissive Mode: Not Supported
00:22:07.950 NVM Sets: Not Supported
00:22:07.950 Read Recovery Levels: Not Supported
00:22:07.950 Endurance Groups: Not Supported
00:22:07.950 Predictable Latency Mode: Not Supported
00:22:07.950 Traffic Based Keep ALive: Not Supported
00:22:07.950 Namespace Granularity: Not Supported
00:22:07.950 SQ Associations: Not Supported
00:22:07.950 UUID List: Not Supported
00:22:07.950 Multi-Domain Subsystem: Not Supported
00:22:07.950 Fixed Capacity Management: Not Supported
00:22:07.950 Variable Capacity Management: Not Supported
00:22:07.950 Delete Endurance Group: Not Supported
00:22:07.950 Delete NVM Set: Not Supported
00:22:07.950 Extended LBA Formats Supported: Not Supported
00:22:07.950 Flexible Data Placement Supported: Not Supported
00:22:07.950
00:22:07.950 Controller Memory Buffer Support
00:22:07.950 ================================
00:22:07.950 Supported: No
00:22:07.950
00:22:07.950 Persistent Memory Region Support
00:22:07.950 ================================
00:22:07.950 Supported: No
00:22:07.950
00:22:07.950 Admin Command Set Attributes
00:22:07.950 ============================
00:22:07.950 Security Send/Receive: Not Supported
00:22:07.950 Format NVM: Not Supported
00:22:07.950 Firmware Activate/Download: Not Supported
00:22:07.950 Namespace Management: Not Supported
00:22:07.950 Device Self-Test: Not Supported
00:22:07.950 Directives: Not Supported
00:22:07.950 NVMe-MI: Not Supported
00:22:07.950 Virtualization Management: Not Supported
00:22:07.950 Doorbell Buffer Config: Not Supported
00:22:07.950 Get LBA Status Capability: Not Supported
00:22:07.950 Command & Feature Lockdown Capability: Not Supported
00:22:07.950 Abort Command Limit: 1
00:22:07.950 Async Event Request Limit: 4
00:22:07.950 Number of Firmware Slots: N/A
00:22:07.950 Firmware Slot 1 Read-Only: N/A
00:22:07.950 Firmware Activation Without Reset: N/A
00:22:07.950 Multiple Update Detection Support: N/A
00:22:07.950 Firmware Update Granularity: No Information Provided
00:22:07.950 Per-Namespace SMART Log: No
00:22:07.950 Asymmetric Namespace Access Log Page: Not Supported
00:22:07.950 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:22:07.950 Command Effects Log Page: Not Supported
00:22:07.950 Get Log Page Extended Data: Supported
00:22:07.950 Telemetry Log Pages: Not Supported
00:22:07.950 Persistent Event Log Pages: Not Supported
00:22:07.950 Supported Log Pages Log Page: May Support
00:22:07.950 Commands Supported & Effects Log Page: Not Supported
00:22:07.950 Feature Identifiers & Effects Log Page:May Support
00:22:07.950 NVMe-MI Commands & Effects Log Page: May Support
00:22:07.950 Data Area 4 for Telemetry Log: Not Supported
00:22:07.950 Error Log Page Entries Supported: 128
00:22:07.950 Keep Alive: Not Supported
00:22:07.950
00:22:07.950 NVM Command Set Attributes
00:22:07.950 ==========================
00:22:07.950 Submission Queue Entry Size
00:22:07.950 Max: 1
00:22:07.950 Min: 1
00:22:07.950 Completion Queue Entry Size
00:22:07.950 Max: 1
00:22:07.950 Min: 1
00:22:07.950 Number of Namespaces: 0
00:22:07.950 Compare Command: Not Supported
00:22:07.950 Write Uncorrectable Command: Not Supported
00:22:07.950 Dataset Management Command: Not Supported
00:22:07.950 Write Zeroes Command: Not Supported
00:22:07.950 Set Features Save Field: Not Supported
00:22:07.950 Reservations: Not Supported
00:22:07.950 Timestamp: Not Supported
00:22:07.950 Copy: Not Supported
00:22:07.950 Volatile Write Cache: Not Present
00:22:07.950 Atomic Write Unit (Normal): 1
00:22:07.950 Atomic Write Unit (PFail): 1
00:22:07.950 Atomic Compare & Write Unit: 1
00:22:07.950 Fused Compare & Write: Supported
00:22:07.950 Scatter-Gather List
00:22:07.950 SGL Command Set: Supported
00:22:07.950 SGL Keyed: Supported
00:22:07.950 SGL Bit Bucket Descriptor: Not Supported
00:22:07.950 SGL Metadata Pointer: Not Supported
00:22:07.950 Oversized SGL: Not Supported
00:22:07.950 SGL Metadata Address: Not Supported
00:22:07.950 SGL Offset: Supported
00:22:07.950 Transport SGL Data Block: Not Supported
00:22:07.950 Replay Protected Memory Block: Not Supported
00:22:07.950
00:22:07.950 Firmware Slot Information
00:22:07.950 =========================
00:22:07.950 Active slot: 0
00:22:07.950
00:22:07.950
00:22:07.950 Error Log
00:22:07.950 =========
00:22:07.950
00:22:07.950 Active Namespaces
00:22:07.950 =================
00:22:07.950 Discovery Log Page
00:22:07.950 ==================
00:22:07.950 Generation Counter: 2
00:22:07.950 Number of Records: 2
00:22:07.950 Record Format: 0
00:22:07.950
00:22:07.950 Discovery Log Entry 0
00:22:07.950 ----------------------
00:22:07.950 Transport Type: 3 (TCP)
00:22:07.950 Address Family: 1 (IPv4)
00:22:07.950 Subsystem Type: 3 (Current Discovery Subsystem)
00:22:07.950 Entry Flags:
00:22:07.950 Duplicate Returned Information: 1
00:22:07.950 Explicit Persistent Connection Support for Discovery: 1
00:22:07.950 Transport Requirements:
00:22:07.950 Secure Channel: Not Required
00:22:07.950 Port ID: 0 (0x0000)
00:22:07.950 Controller ID: 65535 (0xffff)
00:22:07.950 Admin Max SQ Size: 128
00:22:07.950 Transport Service Identifier: 4420
00:22:07.951 NVM Subsystem Qualified Name:
00:22:07.951 Transport Address: 10.0.0.2
00:22:07.951 Discovery Log Entry 1
00:22:07.951 ----------------------
00:22:07.951 Transport Type: 3 (TCP)
00:22:07.951 Address Family: 1 (IPv4)
00:22:07.951 Subsystem Type: 2 (NVM Subsystem)
00:22:07.951 Entry Flags:
00:22:07.951 Duplicate Returned Information: 0
00:22:07.951 Explicit Persistent Connection Support for Discovery: 0
00:22:07.951 Transport Requirements:
00:22:07.951 Secure Channel: Not Required
00:22:07.951 Port ID: 0 (0x0000)
00:22:07.951 Controller ID: 65535 (0xffff)
00:22:07.951 Admin Max SQ Size: 128
00:22:07.951 Transport Service Identifier: 4420
00:22:07.951 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:22:07.951 Transport Address: 10.0.0.2 [2024-04-24 21:27:22.820695] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD
00:22:07.951 [2024-04-24 21:27:22.820713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:07.951 [2024-04-24 21:27:22.820722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:07.951 [2024-04-24 21:27:22.820728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:07.951 [2024-04-24 21:27:22.820735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:07.951 [2024-04-24 21:27:22.820749] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:07.951 [2024-04-24 21:27:22.820754] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:07.951 [2024-04-24 21:27:22.820760] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040)
00:22:07.951 [2024-04-24 21:27:22.820772] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:07.951 [2024-04-24 21:27:22.820792] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0
00:22:07.951 [2024-04-24 21:27:22.820921] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:07.951 [2024-04-24 21:27:22.820928] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:07.951 [2024-04-24 21:27:22.820935] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:07.951 [2024-04-24 21:27:22.820940] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040
00:22:07.951 [2024-04-24 21:27:22.820951] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:07.951 [2024-04-24 21:27:22.820956] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:07.951 [2024-04-24 21:27:22.820961] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040)
00:22:07.951 [2024-04-24 21:27:22.820970] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:07.951 [2024-04-24 21:27:22.820983] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0
00:22:07.951 [2024-04-24 21:27:22.821124] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:07.951 [2024-04-24 21:27:22.821130]
nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:07.951 [2024-04-24 21:27:22.821134] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:07.951 [2024-04-24 21:27:22.821139] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:22:07.951 [2024-04-24 21:27:22.821146] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:22:07.951 [2024-04-24 21:27:22.821152] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:22:07.951 [2024-04-24 21:27:22.821165] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:07.951 [2024-04-24 21:27:22.821174] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:07.951 [2024-04-24 21:27:22.821180] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:22:07.951 [2024-04-24 21:27:22.821189] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.951 [2024-04-24 21:27:22.821199] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:22:07.951 [2024-04-24 21:27:22.821328] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:07.951 [2024-04-24 21:27:22.821334] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:07.951 [2024-04-24 21:27:22.821338] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:07.951 [2024-04-24 21:27:22.821342] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:22:07.951 [2024-04-24 21:27:22.821352] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:07.951 [2024-04-24 21:27:22.821357] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:07.951 [2024-04-24 21:27:22.821361] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:22:07.951 [2024-04-24 21:27:22.821369] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.951 [2024-04-24 21:27:22.821379] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:22:07.951 [2024-04-24 21:27:22.821504] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:07.951 [2024-04-24 21:27:22.821511] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:07.951 [2024-04-24 21:27:22.821514] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:07.951 [2024-04-24 21:27:22.821519] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:22:07.951 [2024-04-24 21:27:22.821528] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:07.951 [2024-04-24 21:27:22.821532] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:07.951 [2024-04-24 21:27:22.821538] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:22:07.951 [2024-04-24 21:27:22.821546] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.951 [2024-04-24 21:27:22.821555] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: 
*DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:22:07.951 [2024-04-24 21:27:22.821684] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:07.951 [2024-04-24 21:27:22.821690] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:07.951 [2024-04-24 21:27:22.821693] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:07.951 [2024-04-24 21:27:22.821698] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:22:07.951 [2024-04-24 21:27:22.821707] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:07.951 [2024-04-24 21:27:22.821711] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:07.951 [2024-04-24 21:27:22.821715] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:22:07.951 [2024-04-24 21:27:22.821727] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.951 [2024-04-24 21:27:22.821737] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:22:07.951 [2024-04-24 21:27:22.821860] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:07.951 [2024-04-24 21:27:22.821866] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:07.951 [2024-04-24 21:27:22.821870] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:07.951 [2024-04-24 21:27:22.821874] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:22:07.951 [2024-04-24 21:27:22.821883] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:07.951 [2024-04-24 21:27:22.821887] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:07.951 [2024-04-24 21:27:22.821891] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:22:07.951 [2024-04-24 21:27:22.821899] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.951 [2024-04-24 21:27:22.821909] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:22:07.951 [2024-04-24 21:27:22.822025] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:07.951 [2024-04-24 21:27:22.822032] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:07.951 [2024-04-24 21:27:22.822035] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:07.951 [2024-04-24 21:27:22.822039] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:22:07.951 [2024-04-24 21:27:22.822049] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:07.951 [2024-04-24 21:27:22.822053] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:07.951 [2024-04-24 21:27:22.822057] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:22:07.951 [2024-04-24 21:27:22.822065] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.951 [2024-04-24 21:27:22.822075] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:22:07.951 [2024-04-24 21:27:22.822203] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: 
*DEBUG*: pdu type = 5 00:22:07.951 [2024-04-24 21:27:22.822209] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:07.951 [2024-04-24 21:27:22.822213] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:07.951 [2024-04-24 21:27:22.822217] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:22:07.951 [2024-04-24 21:27:22.822226] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:07.951 [2024-04-24 21:27:22.822231] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:07.951 [2024-04-24 21:27:22.822237] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:22:07.951 [2024-04-24 21:27:22.822246] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.951 [2024-04-24 21:27:22.822256] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:22:07.951 [2024-04-24 21:27:22.822418] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:07.951 [2024-04-24 21:27:22.822424] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:07.951 [2024-04-24 21:27:22.822428] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:07.951 [2024-04-24 21:27:22.822432] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:22:07.951 [2024-04-24 21:27:22.822442] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:07.951 [2024-04-24 21:27:22.822446] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:07.951 [2024-04-24 21:27:22.822451] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:22:07.952 [2024-04-24 21:27:22.822458] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.952 [2024-04-24 21:27:22.822468] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:22:07.952 [2024-04-24 21:27:22.822591] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:07.952 [2024-04-24 21:27:22.822597] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:07.952 [2024-04-24 21:27:22.822601] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:07.952 [2024-04-24 21:27:22.822605] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:22:07.952 [2024-04-24 21:27:22.822614] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:07.952 [2024-04-24 21:27:22.822619] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:07.952 [2024-04-24 21:27:22.822623] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:22:07.952 [2024-04-24 21:27:22.822630] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.952 [2024-04-24 21:27:22.822640] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:22:07.952 [2024-04-24 21:27:22.822767] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:07.952 [2024-04-24 21:27:22.822773] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
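The run of FABRIC PROPERTY GET completions around this point is the host driver polling CSTS while the discovery controller shuts down; the poll ends a few entries below with "shutdown complete in 6 milliseconds". A minimal sketch of driving the same teardown through SPDK's public detach API, assuming a ctrlr handle obtained earlier from spdk_nvme_connect(), might look like:

    /* Sketch only: graceful detach of an NVMe-oF controller. SPDK sets
     * CC.SHN and then polls CSTS.SHST internally, which is what produces
     * the repeated FABRIC PROPERTY GET traces in this log. */
    #include "spdk/nvme.h"

    static int
    detach_ctrlr(struct spdk_nvme_ctrlr *ctrlr)
    {
            struct spdk_nvme_detach_ctx *ctx = NULL;
            int rc = spdk_nvme_detach_async(ctrlr, &ctx);

            if (rc != 0) {
                    return rc;
            }
            spdk_nvme_detach_poll(ctx);   /* blocks until shutdown completes */
            return 0;
    }

On a reactor thread, the non-blocking spdk_nvme_detach_poll_async() variant can be called repeatedly instead of the blocking poll.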
00:22:07.952 [2024-04-24 21:27:22.822777] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:07.952 [2024-04-24 21:27:22.822781] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:22:07.952 [2024-04-24 21:27:22.822790] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:07.952 [2024-04-24 21:27:22.822795] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:07.952 [2024-04-24 21:27:22.822799] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:22:07.952 [2024-04-24 21:27:22.822807] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.952 [2024-04-24 21:27:22.822816] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:22:07.952 [2024-04-24 21:27:22.822935] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:07.952 [2024-04-24 21:27:22.822941] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:07.952 [2024-04-24 21:27:22.822945] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:07.952 [2024-04-24 21:27:22.822949] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:22:07.952 [2024-04-24 21:27:22.822957] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:07.952 [2024-04-24 21:27:22.822962] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:07.952 [2024-04-24 21:27:22.822967] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:22:07.952 [2024-04-24 21:27:22.822975] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.952 [2024-04-24 21:27:22.822985] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:22:07.952 [2024-04-24 21:27:22.823110] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:07.952 [2024-04-24 21:27:22.823116] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:07.952 [2024-04-24 21:27:22.823120] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:07.952 [2024-04-24 21:27:22.823124] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:22:07.952 [2024-04-24 21:27:22.823133] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:07.952 [2024-04-24 21:27:22.823138] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:07.952 [2024-04-24 21:27:22.823142] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:22:07.952 [2024-04-24 21:27:22.823150] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.952 [2024-04-24 21:27:22.823159] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:22:07.952 [2024-04-24 21:27:22.827284] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:07.952 [2024-04-24 21:27:22.827293] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:07.952 [2024-04-24 21:27:22.827297] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:07.952 [2024-04-24 
21:27:22.827302] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:22:07.952 [2024-04-24 21:27:22.827311] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:07.952 [2024-04-24 21:27:22.827316] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:07.952 [2024-04-24 21:27:22.827320] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:22:07.952 [2024-04-24 21:27:22.827328] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.952 [2024-04-24 21:27:22.827339] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:22:07.952 [2024-04-24 21:27:22.827434] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:07.952 [2024-04-24 21:27:22.827440] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:07.952 [2024-04-24 21:27:22.827444] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:07.952 [2024-04-24 21:27:22.827448] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:22:07.952 [2024-04-24 21:27:22.827456] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds 00:22:07.952 00:22:07.952 21:27:22 -- host/identify.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:22:08.226 [2024-04-24 21:27:22.909725] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization... 
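Here host/identify.sh re-runs spdk_nvme_identify, now against the NVM subsystem nqn.2016-06.io.spdk:cnode1. The DPDK EAL line below is environment setup, and the "setting state to ..." transitions that follow are SPDK's controller initialization state machine. A minimal sketch of the equivalent attach via the public API (a hypothetical standalone program; error handling trimmed):

    /* Sketch: parse the same -r transport string used above and connect.
     * The connect call drives the adminq CONNECT / read vs / read cap /
     * enable sequence visible in the DEBUG traces that follow. */
    #include <stdio.h>
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    int
    main(void)
    {
            struct spdk_env_opts env_opts;
            struct spdk_nvme_transport_id trid = {0};
            struct spdk_nvme_ctrlr *ctrlr;

            spdk_env_opts_init(&env_opts);
            env_opts.name = "identify_sketch";   /* hypothetical app name */
            if (spdk_env_init(&env_opts) < 0) {
                    return 1;
            }
            if (spdk_nvme_transport_id_parse(&trid,
                "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
                "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
                    return 1;
            }
            ctrlr = spdk_nvme_connect(&trid, NULL, 0);
            if (ctrlr == NULL) {
                    return 1;
            }
            printf("connected to %s\n", trid.subnqn);
            spdk_nvme_detach(ctrlr);
            return 0;
    }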
00:22:08.226 [2024-04-24 21:27:22.909823] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1287799 ] 00:22:08.226 EAL: No free 2048 kB hugepages reported on node 1 00:22:08.226 [2024-04-24 21:27:22.967282] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:22:08.226 [2024-04-24 21:27:22.967370] nvme_tcp.c:2326:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:08.226 [2024-04-24 21:27:22.967379] nvme_tcp.c:2330:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:08.226 [2024-04-24 21:27:22.967399] nvme_tcp.c:2348:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:08.226 [2024-04-24 21:27:22.967412] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:08.226 [2024-04-24 21:27:22.967612] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:22:08.226 [2024-04-24 21:27:22.967644] nvme_tcp.c:1543:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x614000002040 0 00:22:08.226 [2024-04-24 21:27:22.982279] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:08.226 [2024-04-24 21:27:22.982295] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:08.226 [2024-04-24 21:27:22.982302] nvme_tcp.c:1589:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:08.226 [2024-04-24 21:27:22.982309] nvme_tcp.c:1590:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:08.226 [2024-04-24 21:27:22.982349] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:08.226 [2024-04-24 21:27:22.982358] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:08.226 [2024-04-24 21:27:22.982364] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:22:08.226 [2024-04-24 21:27:22.982388] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:08.226 [2024-04-24 21:27:22.982410] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:08.226 [2024-04-24 21:27:22.990283] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:08.226 [2024-04-24 21:27:22.990297] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:08.226 [2024-04-24 21:27:22.990301] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:08.226 [2024-04-24 21:27:22.990308] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:22:08.226 [2024-04-24 21:27:22.990322] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:08.226 [2024-04-24 21:27:22.990335] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:22:08.227 [2024-04-24 21:27:22.990348] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:22:08.227 [2024-04-24 21:27:22.990362] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:08.227 [2024-04-24 21:27:22.990369] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
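Once the FABRIC CONNECT and the vs/cap property reads above complete, the tool enables the controller and issues the Identify commands whose output appears later in this log. A hedged sketch of reading a few of the same controller-data fields, assuming a connected ctrlr handle:

    /* Sketch only: mirror a few lines of the identify output using the
     * cached Identify Controller data (CNS 01h) that SPDK fetches during
     * initialization. */
    #include <stdio.h>
    #include "spdk/nvme.h"

    static void
    print_ctrlr_summary(struct spdk_nvme_ctrlr *ctrlr)
    {
            const struct spdk_nvme_ctrlr_data *cdata = spdk_nvme_ctrlr_get_data(ctrlr);

            printf("Serial Number:    %.20s\n", cdata->sn);
            printf("Model Number:     %.40s\n", cdata->mn);
            printf("Firmware Version: %.8s\n", cdata->fr);
            /* MDTS is a power of two in CAP.MPSMIN-sized pages; 0 means
             * unlimited. The Max Data Transfer Size printed in this log
             * is derived from it. */
            printf("MDTS (raw):       %u\n", (unsigned)cdata->mdts);
    }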
00:22:08.227 [2024-04-24 21:27:22.990375] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:22:08.227 [2024-04-24 21:27:22.990390] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.227 [2024-04-24 21:27:22.990409] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:08.227 [2024-04-24 21:27:22.990555] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:08.227 [2024-04-24 21:27:22.990564] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:08.227 [2024-04-24 21:27:22.990574] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:08.227 [2024-04-24 21:27:22.990580] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:22:08.227 [2024-04-24 21:27:22.990588] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:22:08.227 [2024-04-24 21:27:22.990597] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:22:08.227 [2024-04-24 21:27:22.990606] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:08.227 [2024-04-24 21:27:22.990615] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:08.227 [2024-04-24 21:27:22.990621] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:22:08.227 [2024-04-24 21:27:22.990634] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.227 [2024-04-24 21:27:22.990646] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:08.227 [2024-04-24 21:27:22.990775] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:08.227 [2024-04-24 21:27:22.990784] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:08.227 [2024-04-24 21:27:22.990788] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:08.227 [2024-04-24 21:27:22.990792] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:22:08.227 [2024-04-24 21:27:22.990799] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:22:08.227 [2024-04-24 21:27:22.990808] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:22:08.227 [2024-04-24 21:27:22.990816] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:08.227 [2024-04-24 21:27:22.990821] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:08.227 [2024-04-24 21:27:22.990827] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:22:08.228 [2024-04-24 21:27:22.990836] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.228 [2024-04-24 21:27:22.990849] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:08.228 [2024-04-24 21:27:22.990966] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:08.228 [2024-04-24 21:27:22.990973] 
nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:08.228 [2024-04-24 21:27:22.990977] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:08.228 [2024-04-24 21:27:22.990981] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:22:08.228 [2024-04-24 21:27:22.990988] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:08.228 [2024-04-24 21:27:22.991000] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:08.228 [2024-04-24 21:27:22.991005] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:08.228 [2024-04-24 21:27:22.991011] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:22:08.228 [2024-04-24 21:27:22.991022] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.228 [2024-04-24 21:27:22.991033] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:08.228 [2024-04-24 21:27:22.991152] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:08.228 [2024-04-24 21:27:22.991159] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:08.228 [2024-04-24 21:27:22.991163] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:08.228 [2024-04-24 21:27:22.991167] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:22:08.228 [2024-04-24 21:27:22.991175] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:22:08.228 [2024-04-24 21:27:22.991182] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:22:08.228 [2024-04-24 21:27:22.991190] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:08.228 [2024-04-24 21:27:22.991297] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:22:08.228 [2024-04-24 21:27:22.991305] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:22:08.228 [2024-04-24 21:27:22.991318] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:08.228 [2024-04-24 21:27:22.991323] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:08.228 [2024-04-24 21:27:22.991329] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:22:08.228 [2024-04-24 21:27:22.991338] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.228 [2024-04-24 21:27:22.991349] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:08.228 [2024-04-24 21:27:22.991481] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:08.228 [2024-04-24 21:27:22.991488] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:08.228 [2024-04-24 21:27:22.991492] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:08.228 [2024-04-24 
21:27:22.991496] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:22:08.228 [2024-04-24 21:27:22.991503] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:08.228 [2024-04-24 21:27:22.991513] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:08.228 [2024-04-24 21:27:22.991519] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:08.229 [2024-04-24 21:27:22.991525] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:22:08.229 [2024-04-24 21:27:22.991536] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.229 [2024-04-24 21:27:22.991548] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:08.229 [2024-04-24 21:27:22.991670] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:08.229 [2024-04-24 21:27:22.991677] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:08.229 [2024-04-24 21:27:22.991681] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:08.229 [2024-04-24 21:27:22.991685] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:22:08.229 [2024-04-24 21:27:22.991691] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:08.229 [2024-04-24 21:27:22.991698] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:22:08.229 [2024-04-24 21:27:22.991706] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:22:08.229 [2024-04-24 21:27:22.991718] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:22:08.229 [2024-04-24 21:27:22.991731] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:08.229 [2024-04-24 21:27:22.991737] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:22:08.229 [2024-04-24 21:27:22.991746] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.229 [2024-04-24 21:27:22.991757] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:08.229 [2024-04-24 21:27:22.991925] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:08.229 [2024-04-24 21:27:22.991933] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:08.229 [2024-04-24 21:27:22.991937] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:08.230 [2024-04-24 21:27:22.991943] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x614000002040): datao=0, datal=4096, cccid=0 00:22:08.230 [2024-04-24 21:27:22.991951] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b100) on tqpair(0x614000002040): expected_datao=0, payload_size=4096 00:22:08.230 [2024-04-24 21:27:22.991958] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:08.230 
[2024-04-24 21:27:22.991968] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:08.230 [2024-04-24 21:27:22.991974] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:08.230 [2024-04-24 21:27:22.992044] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:08.230 [2024-04-24 21:27:22.992051] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:08.230 [2024-04-24 21:27:22.992055] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:08.230 [2024-04-24 21:27:22.992060] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:22:08.230 [2024-04-24 21:27:22.992071] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:22:08.230 [2024-04-24 21:27:22.992078] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:22:08.230 [2024-04-24 21:27:22.992085] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:22:08.230 [2024-04-24 21:27:22.992093] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:22:08.230 [2024-04-24 21:27:22.992100] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:22:08.231 [2024-04-24 21:27:22.992106] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:22:08.231 [2024-04-24 21:27:22.992117] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:22:08.231 [2024-04-24 21:27:22.992126] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:08.231 [2024-04-24 21:27:22.992132] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:08.231 [2024-04-24 21:27:22.992137] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:22:08.231 [2024-04-24 21:27:22.992147] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:08.231 [2024-04-24 21:27:22.992158] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:08.231 [2024-04-24 21:27:22.992295] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:08.231 [2024-04-24 21:27:22.992301] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:08.231 [2024-04-24 21:27:22.992306] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:08.231 [2024-04-24 21:27:22.992310] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:22:08.231 [2024-04-24 21:27:22.992319] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:08.231 [2024-04-24 21:27:22.992325] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:08.231 [2024-04-24 21:27:22.992330] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:22:08.231 [2024-04-24 21:27:22.992341] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:08.231 [2024-04-24 21:27:22.992349] nvme_tcp.c: 766:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:22:08.231 [2024-04-24 21:27:22.992354] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:08.231 [2024-04-24 21:27:22.992359] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x614000002040) 00:22:08.231 [2024-04-24 21:27:22.992366] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:08.231 [2024-04-24 21:27:22.992372] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:08.231 [2024-04-24 21:27:22.992376] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:08.231 [2024-04-24 21:27:22.992381] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x614000002040) 00:22:08.231 [2024-04-24 21:27:22.992388] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:08.231 [2024-04-24 21:27:22.992395] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:08.231 [2024-04-24 21:27:22.992399] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:08.231 [2024-04-24 21:27:22.992404] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:22:08.231 [2024-04-24 21:27:22.992413] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:08.232 [2024-04-24 21:27:22.992418] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:22:08.232 [2024-04-24 21:27:22.992428] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:08.232 [2024-04-24 21:27:22.992435] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:08.232 [2024-04-24 21:27:22.992441] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x614000002040) 00:22:08.232 [2024-04-24 21:27:22.992450] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.232 [2024-04-24 21:27:22.992462] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:08.232 [2024-04-24 21:27:22.992468] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b260, cid 1, qid 0 00:22:08.232 [2024-04-24 21:27:22.992472] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b3c0, cid 2, qid 0 00:22:08.232 [2024-04-24 21:27:22.992477] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:22:08.232 [2024-04-24 21:27:22.992482] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:22:08.232 [2024-04-24 21:27:22.992634] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:08.232 [2024-04-24 21:27:22.992640] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:08.232 [2024-04-24 21:27:22.992644] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:08.232 [2024-04-24 21:27:22.992650] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x614000002040 00:22:08.232 [2024-04-24 21:27:22.992658] 
nvme_ctrlr.c:2902:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:22:08.232 [2024-04-24 21:27:22.992665] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:22:08.232 [2024-04-24 21:27:22.992674] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:22:08.232 [2024-04-24 21:27:22.992682] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:22:08.232 [2024-04-24 21:27:22.992693] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:08.232 [2024-04-24 21:27:22.992701] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:08.232 [2024-04-24 21:27:22.992706] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x614000002040) 00:22:08.232 [2024-04-24 21:27:22.992716] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:08.236 [2024-04-24 21:27:22.992727] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:22:08.236 [2024-04-24 21:27:22.992848] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:08.237 [2024-04-24 21:27:22.992854] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:08.237 [2024-04-24 21:27:22.992858] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:08.237 [2024-04-24 21:27:22.992862] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x614000002040 00:22:08.237 [2024-04-24 21:27:22.992915] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:22:08.237 [2024-04-24 21:27:22.992927] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:22:08.237 [2024-04-24 21:27:22.992938] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:08.237 [2024-04-24 21:27:22.992945] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x614000002040) 00:22:08.237 [2024-04-24 21:27:22.992954] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.237 [2024-04-24 21:27:22.992965] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:22:08.237 [2024-04-24 21:27:22.993100] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:08.237 [2024-04-24 21:27:22.993108] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:08.237 [2024-04-24 21:27:22.993112] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:08.237 [2024-04-24 21:27:22.993116] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x614000002040): datao=0, datal=4096, cccid=4 00:22:08.237 [2024-04-24 21:27:22.993122] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b680) on tqpair(0x614000002040): expected_datao=0, payload_size=4096 00:22:08.237 [2024-04-24 21:27:22.993127] nvme_tcp.c: 766:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:22:08.237 [2024-04-24 21:27:22.993172] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:08.237 [2024-04-24 21:27:22.993177] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:08.237 [2024-04-24 21:27:22.993265] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:08.237 [2024-04-24 21:27:22.993275] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:08.237 [2024-04-24 21:27:22.993278] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:08.237 [2024-04-24 21:27:22.993283] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x614000002040 00:22:08.237 [2024-04-24 21:27:22.993300] nvme_ctrlr.c:4557:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:22:08.238 [2024-04-24 21:27:22.993319] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:22:08.238 [2024-04-24 21:27:22.993329] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:22:08.238 [2024-04-24 21:27:22.993339] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:08.238 [2024-04-24 21:27:22.993344] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x614000002040) 00:22:08.238 [2024-04-24 21:27:22.993353] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.238 [2024-04-24 21:27:22.993365] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:22:08.238 [2024-04-24 21:27:22.993507] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:08.238 [2024-04-24 21:27:22.993515] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:08.238 [2024-04-24 21:27:22.993519] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:08.238 [2024-04-24 21:27:22.993524] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x614000002040): datao=0, datal=4096, cccid=4 00:22:08.238 [2024-04-24 21:27:22.993529] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b680) on tqpair(0x614000002040): expected_datao=0, payload_size=4096 00:22:08.238 [2024-04-24 21:27:22.993534] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:08.238 [2024-04-24 21:27:22.993578] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:08.238 [2024-04-24 21:27:22.993582] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:08.238 [2024-04-24 21:27:22.993679] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:08.239 [2024-04-24 21:27:22.993688] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:08.239 [2024-04-24 21:27:22.993692] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:08.239 [2024-04-24 21:27:22.993696] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x614000002040 00:22:08.239 [2024-04-24 21:27:22.993713] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:22:08.239 [2024-04-24 21:27:22.993723] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] 
setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:22:08.239 [2024-04-24 21:27:22.993732] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:08.239 [2024-04-24 21:27:22.993738] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x614000002040) 00:22:08.239 [2024-04-24 21:27:22.993747] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.239 [2024-04-24 21:27:22.993759] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:22:08.239 [2024-04-24 21:27:22.993885] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:08.239 [2024-04-24 21:27:22.993891] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:08.239 [2024-04-24 21:27:22.993896] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:08.239 [2024-04-24 21:27:22.993900] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x614000002040): datao=0, datal=4096, cccid=4 00:22:08.239 [2024-04-24 21:27:22.993906] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b680) on tqpair(0x614000002040): expected_datao=0, payload_size=4096 00:22:08.239 [2024-04-24 21:27:22.993911] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:08.239 [2024-04-24 21:27:22.993956] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:08.239 [2024-04-24 21:27:22.993961] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:08.239 [2024-04-24 21:27:22.994059] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:08.239 [2024-04-24 21:27:22.994065] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:08.240 [2024-04-24 21:27:22.994069] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:08.240 [2024-04-24 21:27:22.994074] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x614000002040 00:22:08.240 [2024-04-24 21:27:22.994087] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:22:08.240 [2024-04-24 21:27:22.994095] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:22:08.240 [2024-04-24 21:27:22.994105] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:22:08.240 [2024-04-24 21:27:22.994113] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:22:08.240 [2024-04-24 21:27:22.994120] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:22:08.240 [2024-04-24 21:27:22.994127] nvme_ctrlr.c:2990:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:22:08.240 [2024-04-24 21:27:22.994133] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:22:08.240 [2024-04-24 21:27:22.994140] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:22:08.240 
[2024-04-24 21:27:22.994163] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:08.240 [2024-04-24 21:27:22.994169] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x614000002040) 00:22:08.240 [2024-04-24 21:27:22.994180] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.240 [2024-04-24 21:27:22.994189] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:08.240 [2024-04-24 21:27:22.994195] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:08.240 [2024-04-24 21:27:22.994200] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x614000002040) 00:22:08.240 [2024-04-24 21:27:22.994209] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:08.241 [2024-04-24 21:27:22.994221] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:22:08.241 [2024-04-24 21:27:22.994227] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b7e0, cid 5, qid 0 00:22:08.241 [2024-04-24 21:27:22.998276] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:08.241 [2024-04-24 21:27:22.998285] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:08.241 [2024-04-24 21:27:22.998290] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:08.241 [2024-04-24 21:27:22.998295] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x614000002040 00:22:08.241 [2024-04-24 21:27:22.998305] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:08.241 [2024-04-24 21:27:22.998314] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:08.241 [2024-04-24 21:27:22.998318] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:08.241 [2024-04-24 21:27:22.998322] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b7e0) on tqpair=0x614000002040 00:22:08.241 [2024-04-24 21:27:22.998332] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:08.242 [2024-04-24 21:27:22.998336] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x614000002040) 00:22:08.242 [2024-04-24 21:27:22.998345] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.242 [2024-04-24 21:27:22.998356] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b7e0, cid 5, qid 0 00:22:08.242 [2024-04-24 21:27:22.998460] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:08.242 [2024-04-24 21:27:22.998468] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:08.242 [2024-04-24 21:27:22.998472] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:08.242 [2024-04-24 21:27:22.998476] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b7e0) on tqpair=0x614000002040 00:22:08.242 [2024-04-24 21:27:22.998485] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:08.242 [2024-04-24 21:27:22.998490] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x614000002040) 00:22:08.242 [2024-04-24 21:27:22.998498] nvme_qpair.c: 
213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.242 [2024-04-24 21:27:22.998507] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b7e0, cid 5, qid 0 00:22:08.242 [2024-04-24 21:27:22.998608] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:08.242 [2024-04-24 21:27:22.998616] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:08.242 [2024-04-24 21:27:22.998620] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:08.242 [2024-04-24 21:27:22.998624] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b7e0) on tqpair=0x614000002040 00:22:08.242 [2024-04-24 21:27:22.998633] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:08.243 [2024-04-24 21:27:22.998638] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x614000002040) 00:22:08.243 [2024-04-24 21:27:22.998646] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.243 [2024-04-24 21:27:22.998655] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b7e0, cid 5, qid 0 00:22:08.243 [2024-04-24 21:27:22.998749] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:08.243 [2024-04-24 21:27:22.998755] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:08.243 [2024-04-24 21:27:22.998759] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:08.243 [2024-04-24 21:27:22.998763] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b7e0) on tqpair=0x614000002040 00:22:08.243 [2024-04-24 21:27:22.998782] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:08.243 [2024-04-24 21:27:22.998787] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x614000002040) 00:22:08.243 [2024-04-24 21:27:22.998798] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.243 [2024-04-24 21:27:22.998807] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:08.243 [2024-04-24 21:27:22.998813] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x614000002040) 00:22:08.243 [2024-04-24 21:27:22.998821] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.243 [2024-04-24 21:27:22.998830] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:08.243 [2024-04-24 21:27:22.998835] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x614000002040) 00:22:08.243 [2024-04-24 21:27:22.998844] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.243 [2024-04-24 21:27:22.998855] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:08.243 [2024-04-24 21:27:22.998861] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x614000002040) 00:22:08.243 [2024-04-24 21:27:22.998869] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.243 [2024-04-24 21:27:22.998881] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b7e0, cid 5, qid 0 00:22:08.243 [2024-04-24 21:27:22.998889] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:22:08.244 [2024-04-24 21:27:22.998893] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b940, cid 6, qid 0 00:22:08.244 [2024-04-24 21:27:22.998902] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001baa0, cid 7, qid 0 00:22:08.244 [2024-04-24 21:27:22.999063] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:08.244 [2024-04-24 21:27:22.999070] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:08.244 [2024-04-24 21:27:22.999075] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:08.244 [2024-04-24 21:27:22.999080] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x614000002040): datao=0, datal=8192, cccid=5 00:22:08.244 [2024-04-24 21:27:22.999086] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b7e0) on tqpair(0x614000002040): expected_datao=0, payload_size=8192 00:22:08.244 [2024-04-24 21:27:22.999092] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:08.244 [2024-04-24 21:27:22.999146] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:08.244 [2024-04-24 21:27:22.999152] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:08.244 [2024-04-24 21:27:22.999159] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:08.244 [2024-04-24 21:27:22.999167] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:08.244 [2024-04-24 21:27:22.999171] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:08.244 [2024-04-24 21:27:22.999176] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x614000002040): datao=0, datal=512, cccid=4 00:22:08.244 [2024-04-24 21:27:22.999181] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b680) on tqpair(0x614000002040): expected_datao=0, payload_size=512 00:22:08.244 [2024-04-24 21:27:22.999186] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:08.244 [2024-04-24 21:27:22.999193] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:08.244 [2024-04-24 21:27:22.999197] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:08.244 [2024-04-24 21:27:22.999205] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:08.244 [2024-04-24 21:27:22.999211] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:08.244 [2024-04-24 21:27:22.999215] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:08.244 [2024-04-24 21:27:22.999219] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x614000002040): datao=0, datal=512, cccid=6 00:22:08.245 [2024-04-24 21:27:22.999226] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b940) on tqpair(0x614000002040): expected_datao=0, payload_size=512 00:22:08.245 [2024-04-24 21:27:22.999231] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:08.245 [2024-04-24 21:27:22.999237] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:08.245 [2024-04-24 21:27:22.999241] 
nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:08.245 [2024-04-24 21:27:22.999247] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:08.245 [2024-04-24 21:27:22.999254] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:08.245 [2024-04-24 21:27:22.999258] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:08.245 [2024-04-24 21:27:22.999262] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x614000002040): datao=0, datal=4096, cccid=7 00:22:08.245 [2024-04-24 21:27:22.999270] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001baa0) on tqpair(0x614000002040): expected_datao=0, payload_size=4096 00:22:08.245 [2024-04-24 21:27:22.999275] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:08.245 [2024-04-24 21:27:22.999282] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:08.245 [2024-04-24 21:27:22.999286] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:08.245 [2024-04-24 21:27:22.999294] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:08.245 [2024-04-24 21:27:22.999300] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:08.245 [2024-04-24 21:27:22.999304] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:08.246 [2024-04-24 21:27:22.999310] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b7e0) on tqpair=0x614000002040 00:22:08.246 [2024-04-24 21:27:22.999327] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:08.246 [2024-04-24 21:27:22.999333] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:08.246 [2024-04-24 21:27:22.999337] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:08.246 [2024-04-24 21:27:22.999341] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x614000002040 00:22:08.246 [2024-04-24 21:27:22.999353] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:08.246 [2024-04-24 21:27:22.999359] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:08.246 [2024-04-24 21:27:22.999363] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:08.246 [2024-04-24 21:27:22.999367] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b940) on tqpair=0x614000002040 00:22:08.246 [2024-04-24 21:27:22.999377] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:08.246 [2024-04-24 21:27:22.999383] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:08.246 [2024-04-24 21:27:22.999387] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:08.246 [2024-04-24 21:27:22.999391] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001baa0) on tqpair=0x614000002040 00:22:08.246 ===================================================== 00:22:08.246 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:08.246 ===================================================== 00:22:08.246 Controller Capabilities/Features 00:22:08.246 ================================ 00:22:08.246 Vendor ID: 8086 00:22:08.246 Subsystem Vendor ID: 8086 00:22:08.246 Serial Number: SPDK00000000000001 00:22:08.246 Model Number: SPDK bdev Controller 00:22:08.246 Firmware Version: 24.05 00:22:08.246 Recommended Arb Burst: 6 00:22:08.247 IEEE OUI Identifier: e4 d2 5c 00:22:08.247 
Multi-path I/O
00:22:08.247 May have multiple subsystem ports: Yes
00:22:08.247 May have multiple controllers: Yes
00:22:08.247 Associated with SR-IOV VF: No
00:22:08.247 Max Data Transfer Size: 131072
00:22:08.247 Max Number of Namespaces: 32
00:22:08.247 Max Number of I/O Queues: 127
00:22:08.247 NVMe Specification Version (VS): 1.3
00:22:08.247 NVMe Specification Version (Identify): 1.3
00:22:08.247 Maximum Queue Entries: 128
00:22:08.247 Contiguous Queues Required: Yes
00:22:08.247 Arbitration Mechanisms Supported
00:22:08.247 Weighted Round Robin: Not Supported
00:22:08.247 Vendor Specific: Not Supported
00:22:08.247 Reset Timeout: 15000 ms
00:22:08.247 Doorbell Stride: 4 bytes
00:22:08.247 NVM Subsystem Reset: Not Supported
00:22:08.247 Command Sets Supported
00:22:08.247 NVM Command Set: Supported
00:22:08.247 Boot Partition: Not Supported
00:22:08.247 Memory Page Size Minimum: 4096 bytes
00:22:08.247 Memory Page Size Maximum: 4096 bytes
00:22:08.247 Persistent Memory Region: Not Supported
00:22:08.247 Optional Asynchronous Events Supported
00:22:08.247 Namespace Attribute Notices: Supported
00:22:08.247 Firmware Activation Notices: Not Supported
00:22:08.247 ANA Change Notices: Not Supported
00:22:08.247 PLE Aggregate Log Change Notices: Not Supported
00:22:08.247 LBA Status Info Alert Notices: Not Supported
00:22:08.247 EGE Aggregate Log Change Notices: Not Supported
00:22:08.247 Normal NVM Subsystem Shutdown event: Not Supported
00:22:08.247 Zone Descriptor Change Notices: Not Supported
00:22:08.251 Discovery Log Change Notices: Not Supported
00:22:08.251 Controller Attributes
00:22:08.251 128-bit Host Identifier: Supported
00:22:08.251 Non-Operational Permissive Mode: Not Supported
00:22:08.252 NVM Sets: Not Supported
00:22:08.252 Read Recovery Levels: Not Supported
00:22:08.252 Endurance Groups: Not Supported
00:22:08.252 Predictable Latency Mode: Not Supported
00:22:08.252 Traffic Based Keep ALive: Not Supported
00:22:08.252 Namespace Granularity: Not Supported
00:22:08.252 SQ Associations: Not Supported
00:22:08.252 UUID List: Not Supported
00:22:08.252 Multi-Domain Subsystem: Not Supported
00:22:08.252 Fixed Capacity Management: Not Supported
00:22:08.252 Variable Capacity Management: Not Supported
00:22:08.252 Delete Endurance Group: Not Supported
00:22:08.252 Delete NVM Set: Not Supported
00:22:08.252 Extended LBA Formats Supported: Not Supported
00:22:08.252 Flexible Data Placement Supported: Not Supported
00:22:08.252
00:22:08.252 Controller Memory Buffer Support
00:22:08.252 ================================
00:22:08.252 Supported: No
00:22:08.252
00:22:08.252 Persistent Memory Region Support
00:22:08.252 ================================
00:22:08.252 Supported: No
00:22:08.252
00:22:08.252 Admin Command Set Attributes
00:22:08.252 ============================
00:22:08.252 Security Send/Receive: Not Supported
00:22:08.252 Format NVM: Not Supported
00:22:08.252 Firmware Activate/Download: Not Supported
00:22:08.252 Namespace Management: Not Supported
00:22:08.252 Device Self-Test: Not Supported
00:22:08.252 Directives: Not Supported
00:22:08.252 NVMe-MI: Not Supported
00:22:08.252 Virtualization Management: Not Supported
00:22:08.252 Doorbell Buffer Config: Not Supported
00:22:08.252 Get LBA Status Capability: Not Supported
00:22:08.252 Command & Feature Lockdown Capability: Not Supported
00:22:08.252 Abort Command Limit: 4
00:22:08.252 Async Event Request Limit: 4
00:22:08.252 Number of Firmware Slots: N/A
00:22:08.252 Firmware Slot 1 Read-Only: N/A
00:22:08.252 Firmware Activation Without Reset: N/A
00:22:08.252 Multiple Update Detection Support: N/A
00:22:08.252 Firmware Update Granularity: No Information Provided
00:22:08.252 Per-Namespace SMART Log: No
00:22:08.252 Asymmetric Namespace Access Log Page: Not Supported
00:22:08.252 Subsystem NQN: nqn.2016-06.io.spdk:cnode1
00:22:08.252 Command Effects Log Page: Supported
00:22:08.252 Get Log Page Extended Data: Supported
00:22:08.252 Telemetry Log Pages: Not Supported
00:22:08.252 Persistent Event Log Pages: Not Supported
00:22:08.252 Supported Log Pages Log Page: May Support
00:22:08.252 Commands Supported & Effects Log Page: Not Supported
00:22:08.252 Feature Identifiers & Effects Log Page:May Support
00:22:08.252 NVMe-MI Commands & Effects Log Page: May Support
00:22:08.252 Data Area 4 for Telemetry Log: Not Supported
00:22:08.252 Error Log Page Entries Supported: 128
00:22:08.252 Keep Alive: Supported
00:22:08.252 Keep Alive Granularity: 10000 ms
00:22:08.252
00:22:08.252 NVM Command Set Attributes
00:22:08.252 ==========================
00:22:08.252 Submission Queue Entry Size
00:22:08.252 Max: 64
00:22:08.252 Min: 64
00:22:08.252 Completion Queue Entry Size
00:22:08.252 Max: 16
00:22:08.252 Min: 16
00:22:08.252 Number of Namespaces: 32
00:22:08.252 Compare Command: Supported
00:22:08.252 Write Uncorrectable Command: Not Supported
00:22:08.252 Dataset Management Command: Supported
00:22:08.252 Write Zeroes Command: Supported
00:22:08.252 Set Features Save Field: Not Supported
00:22:08.252 Reservations: Supported
00:22:08.252 Timestamp: Not Supported
00:22:08.252 Copy: Supported
00:22:08.252 Volatile Write Cache: Present
00:22:08.252 Atomic Write Unit (Normal): 1
00:22:08.252 Atomic Write Unit (PFail): 1
00:22:08.252 Atomic Compare & Write Unit: 1
00:22:08.252 Fused Compare & Write: Supported
00:22:08.252 Scatter-Gather List
00:22:08.252 SGL Command Set: Supported
00:22:08.252 SGL Keyed: Supported
00:22:08.252 SGL Bit Bucket Descriptor: Not Supported
00:22:08.252 SGL Metadata Pointer: Not Supported
00:22:08.252 Oversized SGL: Not Supported
00:22:08.252 SGL Metadata Address: Not Supported
00:22:08.252 SGL Offset: Supported
00:22:08.252 Transport SGL Data Block: Not Supported
00:22:08.252 Replay Protected Memory Block: Not Supported
00:22:08.252
00:22:08.252 Firmware Slot Information
00:22:08.252 =========================
00:22:08.252 Active slot: 1
00:22:08.252 Slot 1 Firmware Revision: 24.05
00:22:08.252
00:22:08.252
00:22:08.252 Commands Supported and Effects
00:22:08.252 ==============================
00:22:08.252 Admin Commands
00:22:08.252 --------------
00:22:08.252 Get Log Page (02h): Supported
00:22:08.252 Identify (06h): Supported
00:22:08.252 Abort (08h): Supported
00:22:08.252 Set Features (09h): Supported
00:22:08.252 Get Features (0Ah): Supported
00:22:08.252 Asynchronous Event Request (0Ch): Supported
00:22:08.252 Keep Alive (18h): Supported
00:22:08.252 I/O Commands
00:22:08.252 ------------
00:22:08.252 Flush (00h): Supported LBA-Change
00:22:08.252 Write (01h): Supported LBA-Change
00:22:08.252 Read (02h): Supported
00:22:08.252 Compare (05h): Supported
00:22:08.252 Write Zeroes (08h): Supported LBA-Change
00:22:08.252 Dataset Management (09h): Supported LBA-Change
00:22:08.252 Copy (19h): Supported LBA-Change
00:22:08.252 Unknown (79h): Supported LBA-Change
00:22:08.252 Unknown (7Ah): Supported
00:22:08.252
00:22:08.252 Error Log
00:22:08.252 =========
00:22:08.252
00:22:08.252 Arbitration
00:22:08.252 ===========
00:22:08.252 Arbitration Burst: 1
00:22:08.252
00:22:08.252 Power Management
00:22:08.252 ================
00:22:08.252 Number of Power States: 1
00:22:08.252 Current Power State: Power State #0
00:22:08.252 Power State #0:
00:22:08.252 Max Power: 0.00 W
00:22:08.252 Non-Operational State: Operational
00:22:08.252 Entry Latency: Not Reported
00:22:08.252 Exit Latency: Not Reported
00:22:08.252 Relative Read Throughput: 0
00:22:08.252 Relative Read Latency: 0
00:22:08.252 Relative Write Throughput: 0
00:22:08.252 Relative Write Latency: 0
00:22:08.252 Idle Power: Not Reported
00:22:08.252 Active Power: Not Reported
00:22:08.252 Non-Operational Permissive Mode: Not Supported
00:22:08.252
00:22:08.252 Health Information
00:22:08.252 ==================
00:22:08.252 Critical Warnings:
00:22:08.252 Available Spare Space: OK
00:22:08.252 Temperature: OK
00:22:08.252 Device Reliability: OK
00:22:08.252 Read Only: No
00:22:08.252 Volatile Memory Backup: OK
00:22:08.252 Current Temperature: 0 Kelvin (-273 Celsius)
00:22:08.252 Temperature Threshold: [2024-04-24 21:27:22.999527] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:08.252 [2024-04-24 21:27:22.999533] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x614000002040)
00:22:08.252 [2024-04-24 21:27:22.999545] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:08.252 [2024-04-24 21:27:22.999558] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001baa0, cid 7, qid 0
00:22:08.252 [2024-04-24 21:27:22.999660] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:08.252 [2024-04-24 21:27:22.999667] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:08.253 [2024-04-24 21:27:22.999672] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:08.253 [2024-04-24 21:27:22.999677] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001baa0) on tqpair=0x614000002040
00:22:08.253 [2024-04-24 21:27:22.999719] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD
00:22:08.253 [2024-04-24 21:27:22.999733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:08.253 [2024-04-24 21:27:22.999742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:08.253 [2024-04-24 21:27:22.999748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:08.253 [2024-04-24 21:27:22.999755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:08.253 [2024-04-24 21:27:22.999767] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:08.253 [2024-04-24 21:27:22.999772] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:08.253 [2024-04-24 21:27:22.999779] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040)
00:22:08.253 [2024-04-24 21:27:22.999790] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:08.253 [2024-04-24 21:27:22.999803] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0
00:22:08.253 [2024-04-24
21:27:22.999906] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:08.253 [2024-04-24 21:27:22.999913] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:08.253 [2024-04-24 21:27:22.999918] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:08.253 [2024-04-24 21:27:22.999924] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:22:08.253 [2024-04-24 21:27:22.999935] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:08.253 [2024-04-24 21:27:22.999940] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:08.253 [2024-04-24 21:27:22.999947] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:22:08.253 [2024-04-24 21:27:22.999956] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.253 [2024-04-24 21:27:22.999969] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:22:08.253 [2024-04-24 21:27:23.000075] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:08.253 [2024-04-24 21:27:23.000082] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:08.253 [2024-04-24 21:27:23.000087] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:08.253 [2024-04-24 21:27:23.000091] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:22:08.253 [2024-04-24 21:27:23.000098] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:22:08.253 [2024-04-24 21:27:23.000105] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:22:08.253 [2024-04-24 21:27:23.000115] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:08.253 [2024-04-24 21:27:23.000120] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:08.253 [2024-04-24 21:27:23.000126] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:22:08.253 [2024-04-24 21:27:23.000134] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.253 [2024-04-24 21:27:23.000148] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:22:08.253 [2024-04-24 21:27:23.000242] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:08.253 [2024-04-24 21:27:23.000251] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:08.253 [2024-04-24 21:27:23.000255] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:08.253 [2024-04-24 21:27:23.000259] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:22:08.253 [2024-04-24 21:27:23.000272] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:08.253 [2024-04-24 21:27:23.000277] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:08.253 [2024-04-24 21:27:23.000281] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:22:08.253 [2024-04-24 21:27:23.000289] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:08.253 [2024-04-24 21:27:23.000299] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:22:08.253 [2024-04-24 21:27:23.000392] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:08.253 [2024-04-24 21:27:23.000400] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:08.253 [2024-04-24 21:27:23.000404] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:08.253 [2024-04-24 21:27:23.000409] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:22:08.253 [2024-04-24 21:27:23.000418] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:08.253 [2024-04-24 21:27:23.000422] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:08.253 [2024-04-24 21:27:23.000427] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:22:08.253 [2024-04-24 21:27:23.000435] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.253 [2024-04-24 21:27:23.000444] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:22:08.253 [2024-04-24 21:27:23.000537] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:08.253 [2024-04-24 21:27:23.000545] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:08.253 [2024-04-24 21:27:23.000549] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:08.253 [2024-04-24 21:27:23.000553] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:22:08.253 [2024-04-24 21:27:23.000563] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:08.254 [2024-04-24 21:27:23.000567] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:08.254 [2024-04-24 21:27:23.000571] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:22:08.254 [2024-04-24 21:27:23.000580] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.254 [2024-04-24 21:27:23.000589] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:22:08.254 [2024-04-24 21:27:23.000684] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:08.254 [2024-04-24 21:27:23.000692] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:08.254 [2024-04-24 21:27:23.000695] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:08.254 [2024-04-24 21:27:23.000700] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:22:08.254 [2024-04-24 21:27:23.000709] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:08.254 [2024-04-24 21:27:23.000713] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:08.254 [2024-04-24 21:27:23.000718] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:22:08.254 [2024-04-24 21:27:23.000726] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.254 [2024-04-24 21:27:23.000736] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x62600001b520, cid 3, qid 0 00:22:08.254 [2024-04-24 21:27:23.000831] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:08.254 [2024-04-24 21:27:23.000840] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:08.254 [2024-04-24 21:27:23.000844] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:08.254 [2024-04-24 21:27:23.000848] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:22:08.254 [2024-04-24 21:27:23.000862] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:08.254 [2024-04-24 21:27:23.000867] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:08.254 [2024-04-24 21:27:23.000871] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:22:08.254 [2024-04-24 21:27:23.000881] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.254 [2024-04-24 21:27:23.000891] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:22:08.254 [2024-04-24 21:27:23.000977] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:08.254 [2024-04-24 21:27:23.000984] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:08.254 [2024-04-24 21:27:23.000989] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:08.254 [2024-04-24 21:27:23.000994] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:22:08.254 [2024-04-24 21:27:23.001003] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:08.254 [2024-04-24 21:27:23.001007] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:08.254 [2024-04-24 21:27:23.001012] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:22:08.254 [2024-04-24 21:27:23.001020] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.254 [2024-04-24 21:27:23.001029] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:22:08.254 [2024-04-24 21:27:23.001124] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:08.254 [2024-04-24 21:27:23.001130] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:08.254 [2024-04-24 21:27:23.001136] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:08.254 [2024-04-24 21:27:23.001141] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:22:08.254 [2024-04-24 21:27:23.001150] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:08.254 [2024-04-24 21:27:23.001154] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:08.254 [2024-04-24 21:27:23.001159] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:22:08.254 [2024-04-24 21:27:23.001167] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.254 [2024-04-24 21:27:23.001177] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:22:08.254 [2024-04-24 21:27:23.001271] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 
00:22:08.254 [2024-04-24 21:27:23.001278] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:08.254 [2024-04-24 21:27:23.001282] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:08.254 [2024-04-24 21:27:23.001288] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:22:08.254 [2024-04-24 21:27:23.001297] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:08.254 [2024-04-24 21:27:23.001301] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:08.254 [2024-04-24 21:27:23.001306] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:22:08.254 [2024-04-24 21:27:23.001314] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.254 [2024-04-24 21:27:23.001325] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:22:08.254 [2024-04-24 21:27:23.001417] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:08.254 [2024-04-24 21:27:23.001424] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:08.254 [2024-04-24 21:27:23.001428] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:08.254 [2024-04-24 21:27:23.001435] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:22:08.254 [2024-04-24 21:27:23.001445] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:08.254 [2024-04-24 21:27:23.001450] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:08.254 [2024-04-24 21:27:23.001454] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:22:08.254 [2024-04-24 21:27:23.001462] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.254 [2024-04-24 21:27:23.001472] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:22:08.254 [2024-04-24 21:27:23.001559] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:08.254 [2024-04-24 21:27:23.001565] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:08.254 [2024-04-24 21:27:23.001569] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:08.254 [2024-04-24 21:27:23.001573] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:22:08.254 [2024-04-24 21:27:23.001584] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:08.254 [2024-04-24 21:27:23.001589] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:08.254 [2024-04-24 21:27:23.001594] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:22:08.254 [2024-04-24 21:27:23.001602] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.254 [2024-04-24 21:27:23.001611] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:22:08.254 [2024-04-24 21:27:23.001705] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:08.254 [2024-04-24 21:27:23.001711] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:08.254 
[2024-04-24 21:27:23.001715] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:08.254 [2024-04-24 21:27:23.001720] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:22:08.254 [2024-04-24 21:27:23.001731] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:08.254 [2024-04-24 21:27:23.001736] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:08.254 [2024-04-24 21:27:23.001740] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:22:08.254 [2024-04-24 21:27:23.001748] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.254 [2024-04-24 21:27:23.001758] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:22:08.254 [2024-04-24 21:27:23.001851] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:08.254 [2024-04-24 21:27:23.001858] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:08.254 [2024-04-24 21:27:23.001862] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:08.254 [2024-04-24 21:27:23.001866] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:22:08.254 [2024-04-24 21:27:23.001877] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:08.254 [2024-04-24 21:27:23.001882] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:08.254 [2024-04-24 21:27:23.001886] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:22:08.254 [2024-04-24 21:27:23.001894] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.254 [2024-04-24 21:27:23.001905] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:22:08.254 [2024-04-24 21:27:23.001984] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:08.254 [2024-04-24 21:27:23.001991] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:08.254 [2024-04-24 21:27:23.001995] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:08.254 [2024-04-24 21:27:23.001999] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:22:08.254 [2024-04-24 21:27:23.002008] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:08.254 [2024-04-24 21:27:23.002013] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:08.254 [2024-04-24 21:27:23.002017] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:22:08.254 [2024-04-24 21:27:23.002030] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.254 [2024-04-24 21:27:23.002039] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:22:08.254 [2024-04-24 21:27:23.002123] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:08.254 [2024-04-24 21:27:23.002129] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:08.254 [2024-04-24 21:27:23.002133] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:08.254 [2024-04-24 
21:27:23.002138] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040
00:22:08.254 [2024-04-24 21:27:23.002147] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:08.254 [2024-04-24 21:27:23.002152] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:08.254 [2024-04-24 21:27:23.002156] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040)
00:22:08.254 [2024-04-24 21:27:23.002164] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:08.254 [2024-04-24 21:27:23.002174] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0
00:22:08.255 [2024-04-24 21:27:23.002258] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:08.255 [2024-04-24 21:27:23.002265] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:08.255 [2024-04-24 21:27:23.006276] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:08.255 [2024-04-24 21:27:23.006282] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040
00:22:08.255 [2024-04-24 21:27:23.006292] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:08.255 [2024-04-24 21:27:23.006297] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:08.255 [2024-04-24 21:27:23.006301] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040)
00:22:08.255 [2024-04-24 21:27:23.006310] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:08.255 [2024-04-24 21:27:23.006321] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0
00:22:08.255 [2024-04-24 21:27:23.006418] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:08.255 [2024-04-24 21:27:23.006425] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:08.255 [2024-04-24 21:27:23.006428] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:08.255 [2024-04-24 21:27:23.006433] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040
00:22:08.255 [2024-04-24 21:27:23.006440] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 6 milliseconds
00:22:08.255 0 Kelvin (-273 Celsius)
00:22:08.255 Available Spare: 0%
00:22:08.255 Available Spare Threshold: 0%
00:22:08.255 Life Percentage Used: 0%
00:22:08.255 Data Units Read: 0
00:22:08.255 Data Units Written: 0
00:22:08.255 Host Read Commands: 0
00:22:08.255 Host Write Commands: 0
00:22:08.255 Controller Busy Time: 0 minutes
00:22:08.255 Power Cycles: 0
00:22:08.255 Power On Hours: 0 hours
00:22:08.255 Unsafe Shutdowns: 0
00:22:08.255 Unrecoverable Media Errors: 0
00:22:08.255 Lifetime Error Log Entries: 0
00:22:08.255 Warning Temperature Time: 0 minutes
00:22:08.255 Critical Temperature Time: 0 minutes
00:22:08.255
00:22:08.255 Number of Queues
00:22:08.255 ================
00:22:08.255 Number of I/O Submission Queues: 127
00:22:08.255 Number of I/O Completion Queues: 127
00:22:08.255
00:22:08.255 Active Namespaces
00:22:08.255 =================
00:22:08.255 Namespace ID:1
00:22:08.255 Error Recovery Timeout: Unlimited
00:22:08.255 Command Set Identifier: NVM (00h)
00:22:08.255 Deallocate: Supported
00:22:08.255 Deallocated/Unwritten Error: Not Supported
00:22:08.255 Deallocated Read Value: Unknown
00:22:08.255 Deallocate in Write Zeroes: Not Supported
00:22:08.255 Deallocated Guard Field: 0xFFFF
00:22:08.255 Flush: Supported
00:22:08.255 Reservation: Supported
00:22:08.255 Namespace Sharing Capabilities: Multiple Controllers
00:22:08.255 Size (in LBAs): 131072 (0GiB)
00:22:08.255 Capacity (in LBAs): 131072 (0GiB)
00:22:08.255 Utilization (in LBAs): 131072 (0GiB)
00:22:08.255 NGUID: ABCDEF0123456789ABCDEF0123456789
00:22:08.255 EUI64: ABCDEF0123456789
00:22:08.255 UUID: 1d2d653f-3ce2-460c-ab11-8a9c030187e0
00:22:08.255 Thin Provisioning: Not Supported
00:22:08.255 Per-NS Atomic Units: Yes
00:22:08.255 Atomic Boundary Size (Normal): 0
00:22:08.255 Atomic Boundary Size (PFail): 0
00:22:08.255 Atomic Boundary Offset: 0
00:22:08.255 Maximum Single Source Range Length: 65535
00:22:08.255 Maximum Copy Length: 65535
00:22:08.255 Maximum Source Range Count: 1
00:22:08.255 NGUID/EUI64 Never Reused: No
00:22:08.255 Namespace Write Protected: No
00:22:08.255 Number of LBA Formats: 1
00:22:08.255 Current LBA Format: LBA Format #00
00:22:08.255 LBA Format #00: Data Size: 512 Metadata Size: 0
00:22:08.255
00:22:08.255 21:27:23 -- host/identify.sh@51 -- # sync
00:22:08.255 21:27:23 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:22:08.255 21:27:23 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:08.255 21:27:23 -- common/autotest_common.sh@10 -- # set +x
00:22:08.255 21:27:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:08.255 21:27:23 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT
00:22:08.255 21:27:23 -- host/identify.sh@56 -- # nvmftestfini
00:22:08.255 21:27:23 -- nvmf/common.sh@477 -- # nvmfcleanup
00:22:08.255 21:27:23 -- nvmf/common.sh@117 -- # sync
00:22:08.255 21:27:23 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:22:08.255 21:27:23 -- nvmf/common.sh@120 -- # set +e
00:22:08.255 21:27:23 -- nvmf/common.sh@121 -- # for i in {1..20}
00:22:08.255 21:27:23 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:22:08.255 21:27:23 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:22:08.255 21:27:23 -- nvmf/common.sh@124 -- # set -e
00:22:08.255 21:27:23 -- nvmf/common.sh@125 -- # return 0
00:22:08.255 21:27:23 -- nvmf/common.sh@478 -- # '[' -n 1287485 ']'
00:22:08.255 21:27:23 -- nvmf/common.sh@479 -- # killprocess 1287485
00:22:08.255 21:27:23 -- common/autotest_common.sh@936 -- # '[' -z 1287485 ']'
00:22:08.255 21:27:23 -- common/autotest_common.sh@940 -- # kill -0 1287485
00:22:08.255 21:27:23 -- common/autotest_common.sh@941 -- # uname
00:22:08.255 21:27:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:22:08.255 21:27:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1287485
00:22:08.255 21:27:23 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:22:08.255 21:27:23 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:22:08.255 21:27:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1287485'
killing process with pid 1287485
00:22:08.255 21:27:23 -- common/autotest_common.sh@955 -- # kill 1287485
00:22:08.255 [2024-04-24 21:27:23.176090] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times
00:22:08.255 21:27:23 -- common/autotest_common.sh@960 -- # wait 1287485
00:22:08.823 21:27:23 -- nvmf/common.sh@481 -- # '[' '' == iso ']'
00:22:08.823 21:27:23 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]]
00:22:08.823 21:27:23 -- nvmf/common.sh@485 -- # nvmf_tcp_fini
00:22:08.823 21:27:23 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:22:08.823 21:27:23 -- nvmf/common.sh@278 -- # remove_spdk_ns
00:22:08.823 21:27:23 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:22:08.823 21:27:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:22:08.823 21:27:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:22:10.849 21:27:25 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:22:10.849
00:22:10.849 real 0m9.734s
00:22:10.849 user 0m8.187s
00:22:10.849 sys 0m4.615s
00:22:10.849 21:27:25 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:22:10.849 21:27:25 -- common/autotest_common.sh@10 -- # set +x
00:22:10.849 ************************************
00:22:10.849 END TEST nvmf_identify
00:22:10.849 ************************************
00:22:10.849 21:27:25 -- nvmf/nvmf.sh@96 -- # run_test nvmf_perf /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp
00:22:10.849 21:27:25 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:22:10.849 21:27:25 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:22:10.849 21:27:25 -- common/autotest_common.sh@10 -- # set +x
00:22:11.108 ************************************
00:22:11.108 START TEST nvmf_perf
00:22:11.108 ************************************
00:22:11.108 21:27:25 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp
00:22:11.108 * Looking for test storage...
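The nvmf_identify run that just ended drives SPDK's identify example over NVMe/TCP to produce the controller dump above. A minimal sketch of reproducing that dump by hand, assuming this workspace layout and a target still listening on 10.0.0.2:4420 (the example path and transport-ID string follow SPDK's conventions and are illustrative, not copied from this log):

  cd /var/jenkins/workspace/dsa-phy-autotest/spdk
  # Connect to the subsystem over TCP and print the Identify Controller /
  # Identify Namespace data, as test/nvmf/host/identify.sh does.
  ./build/examples/identify \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'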
00:22:11.108 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host 00:22:11.108 21:27:25 -- host/perf.sh@10 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:22:11.108 21:27:25 -- nvmf/common.sh@7 -- # uname -s 00:22:11.108 21:27:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:11.108 21:27:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:11.108 21:27:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:11.108 21:27:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:11.108 21:27:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:11.108 21:27:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:11.108 21:27:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:11.108 21:27:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:11.108 21:27:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:11.108 21:27:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:11.108 21:27:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b7babf-2e5c-ee11-906e-a4bf01970bf2 00:22:11.108 21:27:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=80b7babf-2e5c-ee11-906e-a4bf01970bf2 00:22:11.108 21:27:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:11.108 21:27:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:11.108 21:27:25 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:22:11.108 21:27:25 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:11.108 21:27:25 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:22:11.108 21:27:25 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:11.108 21:27:25 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:11.108 21:27:25 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:11.108 21:27:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:11.108 21:27:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:11.108 21:27:25 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:11.108 21:27:25 -- paths/export.sh@5 -- # export PATH 00:22:11.108 21:27:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:11.108 21:27:25 -- nvmf/common.sh@47 -- # : 0 00:22:11.108 21:27:25 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:11.108 21:27:25 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:11.108 21:27:25 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:11.108 21:27:25 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:11.108 21:27:25 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:11.108 21:27:25 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:11.108 21:27:25 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:11.108 21:27:25 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:11.108 21:27:25 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:11.108 21:27:25 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:11.108 21:27:25 -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:22:11.108 21:27:25 -- host/perf.sh@17 -- # nvmftestinit 00:22:11.108 21:27:25 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:22:11.108 21:27:25 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:11.108 21:27:25 -- nvmf/common.sh@437 -- # prepare_net_devs 00:22:11.108 21:27:25 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:22:11.108 21:27:25 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:22:11.108 21:27:25 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:11.108 21:27:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:11.108 21:27:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:11.108 21:27:25 -- nvmf/common.sh@403 -- # [[ phy-fallback != virt ]] 00:22:11.108 21:27:25 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:22:11.108 21:27:25 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:11.108 21:27:25 -- common/autotest_common.sh@10 -- # set +x 00:22:17.689 21:27:31 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:17.689 21:27:31 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:17.689 21:27:31 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:17.689 21:27:31 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:17.689 21:27:31 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:17.689 21:27:31 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:17.689 21:27:31 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:17.689 21:27:31 -- nvmf/common.sh@295 -- # net_devs=() 
00:22:17.689 21:27:31 -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:17.689 21:27:31 -- nvmf/common.sh@296 -- # e810=() 00:22:17.689 21:27:31 -- nvmf/common.sh@296 -- # local -ga e810 00:22:17.689 21:27:31 -- nvmf/common.sh@297 -- # x722=() 00:22:17.689 21:27:31 -- nvmf/common.sh@297 -- # local -ga x722 00:22:17.689 21:27:31 -- nvmf/common.sh@298 -- # mlx=() 00:22:17.689 21:27:31 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:17.689 21:27:31 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:17.689 21:27:31 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:17.689 21:27:31 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:17.689 21:27:31 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:17.689 21:27:31 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:17.689 21:27:31 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:17.689 21:27:31 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:17.689 21:27:31 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:17.689 21:27:31 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:17.689 21:27:31 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:17.689 21:27:31 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:17.689 21:27:31 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:17.689 21:27:31 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:17.689 21:27:31 -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:22:17.689 21:27:31 -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:22:17.689 21:27:31 -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:22:17.689 21:27:31 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:17.689 21:27:31 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:17.689 21:27:31 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:22:17.689 Found 0000:27:00.0 (0x8086 - 0x159b) 00:22:17.689 21:27:31 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:17.689 21:27:31 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:17.689 21:27:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:17.689 21:27:31 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:17.689 21:27:31 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:17.689 21:27:31 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:17.689 21:27:31 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:22:17.689 Found 0000:27:00.1 (0x8086 - 0x159b) 00:22:17.689 21:27:31 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:17.689 21:27:31 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:17.689 21:27:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:17.689 21:27:31 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:17.689 21:27:31 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:17.689 21:27:31 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:17.689 21:27:31 -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:22:17.689 21:27:31 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:17.689 21:27:31 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:17.689 21:27:31 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:17.689 21:27:31 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:17.689 21:27:31 -- nvmf/common.sh@389 -- # echo 'Found net devices under 
0000:27:00.0: cvl_0_0' 00:22:17.689 Found net devices under 0000:27:00.0: cvl_0_0 00:22:17.689 21:27:31 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:17.689 21:27:31 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:17.689 21:27:31 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:17.689 21:27:31 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:17.689 21:27:31 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:17.689 21:27:31 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:22:17.689 Found net devices under 0000:27:00.1: cvl_0_1 00:22:17.689 21:27:31 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:17.689 21:27:31 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:22:17.689 21:27:31 -- nvmf/common.sh@403 -- # is_hw=yes 00:22:17.689 21:27:31 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:22:17.689 21:27:31 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:22:17.689 21:27:31 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:22:17.689 21:27:31 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:17.689 21:27:31 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:17.690 21:27:31 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:17.690 21:27:31 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:17.690 21:27:31 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:17.690 21:27:31 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:17.690 21:27:31 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:17.690 21:27:31 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:17.690 21:27:31 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:17.690 21:27:31 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:17.690 21:27:31 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:17.690 21:27:31 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:17.690 21:27:31 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:17.690 21:27:31 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:17.690 21:27:31 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:17.690 21:27:31 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:17.690 21:27:31 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:17.690 21:27:31 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:17.690 21:27:31 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:17.690 21:27:31 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:17.690 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:17.690 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.220 ms 00:22:17.690 00:22:17.690 --- 10.0.0.2 ping statistics --- 00:22:17.690 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:17.690 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:22:17.690 21:27:31 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:17.690 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:17.690 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms 00:22:17.690 00:22:17.690 --- 10.0.0.1 ping statistics --- 00:22:17.690 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:17.690 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:22:17.690 21:27:31 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:17.690 21:27:31 -- nvmf/common.sh@411 -- # return 0 00:22:17.690 21:27:31 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:22:17.690 21:27:31 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:17.690 21:27:31 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:22:17.690 21:27:31 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:22:17.690 21:27:31 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:17.690 21:27:31 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:22:17.690 21:27:31 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:22:17.690 21:27:31 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:22:17.690 21:27:31 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:22:17.690 21:27:31 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:17.690 21:27:31 -- common/autotest_common.sh@10 -- # set +x 00:22:17.690 21:27:31 -- nvmf/common.sh@470 -- # nvmfpid=1291993 00:22:17.690 21:27:31 -- nvmf/common.sh@471 -- # waitforlisten 1291993 00:22:17.690 21:27:31 -- common/autotest_common.sh@817 -- # '[' -z 1291993 ']' 00:22:17.690 21:27:31 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:17.690 21:27:31 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:17.690 21:27:31 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:17.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:17.690 21:27:31 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:17.690 21:27:31 -- common/autotest_common.sh@10 -- # set +x 00:22:17.690 21:27:31 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:17.690 [2024-04-24 21:27:31.900308] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization... 00:22:17.690 [2024-04-24 21:27:31.900440] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:17.690 EAL: No free 2048 kB hugepages reported on node 1 00:22:17.690 [2024-04-24 21:27:32.039086] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:17.690 [2024-04-24 21:27:32.133993] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:17.690 [2024-04-24 21:27:32.134039] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:17.690 [2024-04-24 21:27:32.134052] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:17.690 [2024-04-24 21:27:32.134061] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:17.690 [2024-04-24 21:27:32.134069] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
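Condensed from the trace above, the nvmf_tcp_init plumbing reduces to a handful of iproute2/iptables calls; interface names are the cvl_* ports discovered earlier, and nvmf_tgt is then launched under `ip netns exec cvl_0_0_ns_spdk`:

ip netns add cvl_0_0_ns_spdk                                   # private namespace for the target
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # move one NIC port inside it
ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator port stays in the root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
ping -c 1 10.0.0.2                                             # sanity-check reachability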
00:22:17.690 [2024-04-24 21:27:32.134164] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:17.690 [2024-04-24 21:27:32.134280] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:17.690 [2024-04-24 21:27:32.134370] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:17.690 [2024-04-24 21:27:32.134380] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:17.690 21:27:32 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:17.690 21:27:32 -- common/autotest_common.sh@850 -- # return 0 00:22:17.690 21:27:32 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:22:17.690 21:27:32 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:17.690 21:27:32 -- common/autotest_common.sh@10 -- # set +x 00:22:17.690 21:27:32 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:17.690 21:27:32 -- host/perf.sh@28 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/gen_nvme.sh 00:22:17.690 21:27:32 -- host/perf.sh@28 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:22:27.680 21:27:41 -- host/perf.sh@30 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:22:27.680 21:27:41 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:22:27.680 21:27:41 -- host/perf.sh@30 -- # local_nvme_trid=0000:c9:00.0 00:22:27.680 21:27:41 -- host/perf.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:22:27.680 21:27:41 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:22:27.680 21:27:41 -- host/perf.sh@33 -- # '[' -n 0000:c9:00.0 ']' 00:22:27.680 21:27:41 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:22:27.680 21:27:41 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:22:27.680 21:27:41 -- host/perf.sh@42 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:27.680 [2024-04-24 21:27:41.734526] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:27.680 21:27:41 -- host/perf.sh@44 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:27.680 21:27:41 -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:27.680 21:27:41 -- host/perf.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:27.680 21:27:42 -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:27.680 21:27:42 -- host/perf.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:22:27.680 21:27:42 -- host/perf.sh@48 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:27.680 [2024-04-24 21:27:42.341735] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:27.680 21:27:42 -- host/perf.sh@49 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:27.680 21:27:42 -- host/perf.sh@52 -- # '[' -n 0000:c9:00.0 ']' 00:22:27.680 21:27:42 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:c9:00.0' 00:22:27.680 21:27:42 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:22:27.680 21:27:42 -- host/perf.sh@24 -- # 
/var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:c9:00.0' 00:22:29.059 Initializing NVMe Controllers 00:22:29.059 Attached to NVMe Controller at 0000:c9:00.0 [8086:0a54] 00:22:29.059 Associating PCIE (0000:c9:00.0) NSID 1 with lcore 0 00:22:29.059 Initialization complete. Launching workers. 00:22:29.059 ======================================================== 00:22:29.059 Latency(us) 00:22:29.059 Device Information : IOPS MiB/s Average min max 00:22:29.059 PCIE (0000:c9:00.0) NSID 1 from core 0: 97110.24 379.34 329.00 20.12 5744.13 00:22:29.059 ======================================================== 00:22:29.059 Total : 97110.24 379.34 329.00 20.12 5744.13 00:22:29.059 00:22:29.059 21:27:43 -- host/perf.sh@56 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:29.059 EAL: No free 2048 kB hugepages reported on node 1 00:22:30.967 Initializing NVMe Controllers 00:22:30.967 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:30.967 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:30.967 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:30.967 Initialization complete. Launching workers. 00:22:30.967 ======================================================== 00:22:30.967 Latency(us) 00:22:30.967 Device Information : IOPS MiB/s Average min max 00:22:30.967 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 80.00 0.31 12674.37 152.45 45927.50 00:22:30.967 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 53.00 0.21 19281.52 7976.68 48002.30 00:22:30.967 ======================================================== 00:22:30.967 Total : 133.00 0.52 15307.29 152.45 48002.30 00:22:30.967 00:22:30.967 21:27:45 -- host/perf.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:30.967 EAL: No free 2048 kB hugepages reported on node 1 00:22:31.908 Initializing NVMe Controllers 00:22:31.908 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:31.908 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:31.908 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:31.908 Initialization complete. Launching workers. 
00:22:31.908 ======================================================== 00:22:31.908 Latency(us) 00:22:31.908 Device Information : IOPS MiB/s Average min max 00:22:31.909 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11001.92 42.98 2910.45 320.48 6546.88 00:22:31.909 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3990.61 15.59 8072.26 6918.52 15968.13 00:22:31.909 ======================================================== 00:22:31.909 Total : 14992.53 58.56 4284.38 320.48 15968.13 00:22:31.909 00:22:31.909 21:27:46 -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:22:31.909 21:27:46 -- host/perf.sh@60 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:31.909 EAL: No free 2048 kB hugepages reported on node 1 00:22:34.444 Initializing NVMe Controllers 00:22:34.444 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:34.444 Controller IO queue size 128, less than required. 00:22:34.444 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:34.444 Controller IO queue size 128, less than required. 00:22:34.444 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:34.444 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:34.444 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:34.444 Initialization complete. Launching workers. 00:22:34.444 ======================================================== 00:22:34.444 Latency(us) 00:22:34.444 Device Information : IOPS MiB/s Average min max 00:22:34.444 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1339.55 334.89 97926.81 50307.76 157905.07 00:22:34.444 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 580.80 145.20 226652.81 88769.20 336039.06 00:22:34.444 ======================================================== 00:22:34.444 Total : 1920.35 480.09 136859.55 50307.76 336039.06 00:22:34.444 00:22:34.444 21:27:49 -- host/perf.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:22:34.444 EAL: No free 2048 kB hugepages reported on node 1 00:22:35.016 No valid NVMe controllers or AIO or URING devices found 00:22:35.016 Initializing NVMe Controllers 00:22:35.016 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:35.016 Controller IO queue size 128, less than required. 00:22:35.016 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:35.016 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:22:35.016 Controller IO queue size 128, less than required. 00:22:35.016 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:35.016 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:22:35.016 WARNING: Some requested NVMe devices were skipped 00:22:35.016 21:27:49 -- host/perf.sh@65 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:22:35.016 EAL: No free 2048 kB hugepages reported on node 1 00:22:37.550 Initializing NVMe Controllers 00:22:37.550 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:37.550 Controller IO queue size 128, less than required. 00:22:37.550 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:37.550 Controller IO queue size 128, less than required. 00:22:37.550 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:37.550 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:37.550 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:37.550 Initialization complete. Launching workers. 00:22:37.550 00:22:37.550 ==================== 00:22:37.550 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:22:37.550 TCP transport: 00:22:37.550 polls: 36610 00:22:37.550 idle_polls: 10523 00:22:37.550 sock_completions: 26087 00:22:37.550 nvme_completions: 5239 00:22:37.550 submitted_requests: 7858 00:22:37.550 queued_requests: 1 00:22:37.550 00:22:37.550 ==================== 00:22:37.550 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:22:37.550 TCP transport: 00:22:37.550 polls: 38254 00:22:37.550 idle_polls: 11878 00:22:37.550 sock_completions: 26376 00:22:37.550 nvme_completions: 5429 00:22:37.551 submitted_requests: 8254 00:22:37.551 queued_requests: 1 00:22:37.551 ======================================================== 00:22:37.551 Latency(us) 00:22:37.551 Device Information : IOPS MiB/s Average min max 00:22:37.551 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1308.93 327.23 99735.58 49375.14 206367.12 00:22:37.551 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1356.41 339.10 96347.74 45341.61 192656.69 00:22:37.551 ======================================================== 00:22:37.551 Total : 2665.34 666.34 98011.48 45341.61 206367.12 00:22:37.551 00:22:37.551 21:27:52 -- host/perf.sh@66 -- # sync 00:22:37.551 21:27:52 -- host/perf.sh@67 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:37.551 21:27:52 -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:22:37.551 21:27:52 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:22:37.551 21:27:52 -- host/perf.sh@114 -- # nvmftestfini 00:22:37.551 21:27:52 -- nvmf/common.sh@477 -- # nvmfcleanup 00:22:37.551 21:27:52 -- nvmf/common.sh@117 -- # sync 00:22:37.809 21:27:52 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:37.809 21:27:52 -- nvmf/common.sh@120 -- # set +e 00:22:37.809 21:27:52 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:37.809 21:27:52 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:37.809 rmmod nvme_tcp 00:22:37.809 rmmod nvme_fabrics 00:22:37.809 rmmod nvme_keyring 00:22:37.809 21:27:52 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:37.809 21:27:52 -- nvmf/common.sh@124 -- # set -e 00:22:37.809 21:27:52 -- nvmf/common.sh@125 -- # return 0 00:22:37.809 21:27:52 -- 
nvmf/common.sh@478 -- # '[' -n 1291993 ']' 00:22:37.809 21:27:52 -- nvmf/common.sh@479 -- # killprocess 1291993 00:22:37.809 21:27:52 -- common/autotest_common.sh@936 -- # '[' -z 1291993 ']' 00:22:37.809 21:27:52 -- common/autotest_common.sh@940 -- # kill -0 1291993 00:22:37.809 21:27:52 -- common/autotest_common.sh@941 -- # uname 00:22:37.809 21:27:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:37.809 21:27:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1291993 00:22:37.809 21:27:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:37.809 21:27:52 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:37.809 21:27:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1291993' 00:22:37.809 killing process with pid 1291993 00:22:37.809 21:27:52 -- common/autotest_common.sh@955 -- # kill 1291993 00:22:37.809 21:27:52 -- common/autotest_common.sh@960 -- # wait 1291993 00:22:41.098 21:27:56 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:22:41.098 21:27:56 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:22:41.098 21:27:56 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:22:41.098 21:27:56 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:41.098 21:27:56 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:41.098 21:27:56 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:41.099 21:27:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:41.099 21:27:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:43.631 21:27:58 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:43.631 00:22:43.631 real 0m32.223s 00:22:43.631 user 1m34.511s 00:22:43.631 sys 0m6.830s 00:22:43.631 21:27:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:43.631 21:27:58 -- common/autotest_common.sh@10 -- # set +x 00:22:43.631 ************************************ 00:22:43.631 END TEST nvmf_perf 00:22:43.631 ************************************ 00:22:43.632 21:27:58 -- nvmf/nvmf.sh@97 -- # run_test nvmf_fio_host /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:22:43.632 21:27:58 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:43.632 21:27:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:43.632 21:27:58 -- common/autotest_common.sh@10 -- # set +x 00:22:43.632 ************************************ 00:22:43.632 START TEST nvmf_fio_host 00:22:43.632 ************************************ 00:22:43.632 21:27:58 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:22:43.632 * Looking for test storage... 
00:22:43.632 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host 00:22:43.632 21:27:58 -- host/fio.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:22:43.632 21:27:58 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:43.632 21:27:58 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:43.632 21:27:58 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:43.632 21:27:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.632 21:27:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.632 21:27:58 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.632 21:27:58 -- paths/export.sh@5 -- # export PATH 00:22:43.632 21:27:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.632 21:27:58 -- host/fio.sh@10 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:22:43.632 21:27:58 -- nvmf/common.sh@7 -- # uname -s 00:22:43.632 21:27:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:43.632 21:27:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:43.632 21:27:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:43.632 21:27:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:43.632 21:27:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:43.632 21:27:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:43.632 21:27:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:43.632 21:27:58 -- nvmf/common.sh@15 
-- # NVMF_TRANSPORT_OPTS= 00:22:43.632 21:27:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:43.632 21:27:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:43.632 21:27:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b7babf-2e5c-ee11-906e-a4bf01970bf2 00:22:43.632 21:27:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=80b7babf-2e5c-ee11-906e-a4bf01970bf2 00:22:43.632 21:27:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:43.632 21:27:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:43.632 21:27:58 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:22:43.632 21:27:58 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:43.632 21:27:58 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:22:43.632 21:27:58 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:43.632 21:27:58 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:43.632 21:27:58 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:43.632 21:27:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.632 21:27:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.632 21:27:58 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.632 21:27:58 -- paths/export.sh@5 -- # export PATH 00:22:43.632 21:27:58 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.632 21:27:58 -- nvmf/common.sh@47 -- # : 0 00:22:43.632 21:27:58 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:43.632 21:27:58 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:43.632 21:27:58 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:43.632 21:27:58 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:43.632 21:27:58 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:43.632 21:27:58 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:43.632 21:27:58 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:43.632 21:27:58 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:43.632 21:27:58 -- host/fio.sh@12 -- # nvmftestinit 00:22:43.632 21:27:58 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:22:43.632 21:27:58 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:43.632 21:27:58 -- nvmf/common.sh@437 -- # prepare_net_devs 00:22:43.632 21:27:58 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:22:43.632 21:27:58 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:22:43.632 21:27:58 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:43.632 21:27:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:43.632 21:27:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:43.632 21:27:58 -- nvmf/common.sh@403 -- # [[ phy-fallback != virt ]] 00:22:43.632 21:27:58 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:22:43.632 21:27:58 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:43.632 21:27:58 -- common/autotest_common.sh@10 -- # set +x 00:22:48.908 21:28:03 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:48.908 21:28:03 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:48.908 21:28:03 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:48.908 21:28:03 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:48.908 21:28:03 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:48.908 21:28:03 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:48.908 21:28:03 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:48.908 21:28:03 -- nvmf/common.sh@295 -- # net_devs=() 00:22:48.908 21:28:03 -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:48.908 21:28:03 -- nvmf/common.sh@296 -- # e810=() 00:22:48.908 21:28:03 -- nvmf/common.sh@296 -- # local -ga e810 00:22:48.908 21:28:03 -- nvmf/common.sh@297 -- # x722=() 00:22:48.908 21:28:03 -- nvmf/common.sh@297 -- # local -ga x722 00:22:48.908 21:28:03 -- nvmf/common.sh@298 -- # mlx=() 00:22:48.908 21:28:03 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:48.908 21:28:03 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:48.908 21:28:03 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:48.908 21:28:03 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:48.908 21:28:03 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:48.908 21:28:03 -- nvmf/common.sh@308 -- 
# mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:48.908 21:28:03 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:48.908 21:28:03 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:48.908 21:28:03 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:48.908 21:28:03 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:48.908 21:28:03 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:48.908 21:28:03 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:48.908 21:28:03 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:48.908 21:28:03 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:48.908 21:28:03 -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:22:48.908 21:28:03 -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:22:48.908 21:28:03 -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:22:48.908 21:28:03 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:48.908 21:28:03 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:48.908 21:28:03 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:22:48.908 Found 0000:27:00.0 (0x8086 - 0x159b) 00:22:48.908 21:28:03 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:48.908 21:28:03 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:48.908 21:28:03 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:48.908 21:28:03 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:48.908 21:28:03 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:48.908 21:28:03 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:48.908 21:28:03 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:22:48.908 Found 0000:27:00.1 (0x8086 - 0x159b) 00:22:48.908 21:28:03 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:48.908 21:28:03 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:48.908 21:28:03 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:48.908 21:28:03 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:48.908 21:28:03 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:48.908 21:28:03 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:48.908 21:28:03 -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:22:48.908 21:28:03 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:48.908 21:28:03 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:48.908 21:28:03 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:48.908 21:28:03 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:48.908 21:28:03 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:22:48.908 Found net devices under 0000:27:00.0: cvl_0_0 00:22:48.908 21:28:03 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:48.908 21:28:03 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:48.908 21:28:03 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:48.908 21:28:03 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:48.908 21:28:03 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:48.909 21:28:03 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:22:48.909 Found net devices under 0000:27:00.1: cvl_0_1 00:22:48.909 21:28:03 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:48.909 21:28:03 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:22:48.909 21:28:03 -- nvmf/common.sh@403 -- # 
is_hw=yes 00:22:48.909 21:28:03 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:22:48.909 21:28:03 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:22:48.909 21:28:03 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:22:48.909 21:28:03 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:48.909 21:28:03 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:48.909 21:28:03 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:48.909 21:28:03 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:48.909 21:28:03 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:48.909 21:28:03 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:48.909 21:28:03 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:48.909 21:28:03 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:48.909 21:28:03 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:48.909 21:28:03 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:48.909 21:28:03 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:48.909 21:28:03 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:48.909 21:28:03 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:48.909 21:28:03 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:48.909 21:28:03 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:48.909 21:28:03 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:48.909 21:28:03 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:48.909 21:28:03 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:48.909 21:28:03 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:48.909 21:28:03 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:48.909 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:48.909 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.137 ms 00:22:48.909 00:22:48.909 --- 10.0.0.2 ping statistics --- 00:22:48.909 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:48.909 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:22:48.909 21:28:03 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:48.909 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:48.909 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.090 ms 00:22:48.909 00:22:48.909 --- 10.0.0.1 ping statistics --- 00:22:48.909 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:48.909 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:22:48.909 21:28:03 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:48.909 21:28:03 -- nvmf/common.sh@411 -- # return 0 00:22:48.909 21:28:03 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:22:48.909 21:28:03 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:48.909 21:28:03 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:22:48.909 21:28:03 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:22:48.909 21:28:03 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:48.909 21:28:03 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:22:48.909 21:28:03 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:22:48.909 21:28:03 -- host/fio.sh@14 -- # [[ y != y ]] 00:22:48.909 21:28:03 -- host/fio.sh@19 -- # timing_enter start_nvmf_tgt 00:22:48.909 21:28:03 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:48.909 21:28:03 -- common/autotest_common.sh@10 -- # set +x 00:22:49.167 21:28:03 -- host/fio.sh@22 -- # nvmfpid=1300527 00:22:49.167 21:28:03 -- host/fio.sh@24 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:49.167 21:28:03 -- host/fio.sh@26 -- # waitforlisten 1300527 00:22:49.167 21:28:03 -- common/autotest_common.sh@817 -- # '[' -z 1300527 ']' 00:22:49.167 21:28:03 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:49.167 21:28:03 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:49.167 21:28:03 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:49.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:49.167 21:28:03 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:49.167 21:28:03 -- common/autotest_common.sh@10 -- # set +x 00:22:49.167 21:28:03 -- host/fio.sh@21 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:49.167 [2024-04-24 21:28:03.949125] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization... 00:22:49.167 [2024-04-24 21:28:03.949225] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:49.167 EAL: No free 2048 kB hugepages reported on node 1 00:22:49.167 [2024-04-24 21:28:04.068437] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:49.426 [2024-04-24 21:28:04.162636] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:49.426 [2024-04-24 21:28:04.162671] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:49.426 [2024-04-24 21:28:04.162682] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:49.426 [2024-04-24 21:28:04.162690] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:49.426 [2024-04-24 21:28:04.162697] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
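Both suites provision the target over the same RPC sequence; condensed here for reference (perf.sh drives rpc.py directly as seen earlier, while fio.sh issues the equivalent rpc_cmd calls that follow, with Malloc1 in place of Malloc0):

rpc.py nvmf_create_transport -t tcp -o                         # NVMF_TRANSPORT_OPTS from the trace
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py bdev_malloc_create 64 512                               # backing namespace bdev
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420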
00:22:49.426 [2024-04-24 21:28:04.162768] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:49.426 [2024-04-24 21:28:04.162875] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:49.426 [2024-04-24 21:28:04.162976] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:49.426 [2024-04-24 21:28:04.162986] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:49.686 21:28:04 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:49.686 21:28:04 -- common/autotest_common.sh@850 -- # return 0 00:22:49.686 21:28:04 -- host/fio.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:49.686 21:28:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:49.686 21:28:04 -- common/autotest_common.sh@10 -- # set +x 00:22:49.946 [2024-04-24 21:28:04.656638] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:49.946 21:28:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:49.946 21:28:04 -- host/fio.sh@28 -- # timing_exit start_nvmf_tgt 00:22:49.946 21:28:04 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:49.946 21:28:04 -- common/autotest_common.sh@10 -- # set +x 00:22:49.946 21:28:04 -- host/fio.sh@30 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:49.946 21:28:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:49.946 21:28:04 -- common/autotest_common.sh@10 -- # set +x 00:22:49.946 Malloc1 00:22:49.946 21:28:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:49.946 21:28:04 -- host/fio.sh@31 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:49.946 21:28:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:49.946 21:28:04 -- common/autotest_common.sh@10 -- # set +x 00:22:49.946 21:28:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:49.946 21:28:04 -- host/fio.sh@32 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:49.946 21:28:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:49.946 21:28:04 -- common/autotest_common.sh@10 -- # set +x 00:22:49.946 21:28:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:49.946 21:28:04 -- host/fio.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:49.946 21:28:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:49.946 21:28:04 -- common/autotest_common.sh@10 -- # set +x 00:22:49.946 [2024-04-24 21:28:04.760919] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:49.946 21:28:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:49.946 21:28:04 -- host/fio.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:49.946 21:28:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:49.946 21:28:04 -- common/autotest_common.sh@10 -- # set +x 00:22:49.946 21:28:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:49.946 21:28:04 -- host/fio.sh@36 -- # PLUGIN_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk/app/fio/nvme 00:22:49.946 21:28:04 -- host/fio.sh@39 -- # fio_nvme /var/jenkins/workspace/dsa-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:49.947 21:28:04 -- common/autotest_common.sh@1346 -- # fio_plugin /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/dsa-phy-autotest/spdk/app/fio/nvme/example_config.fio 
'--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:49.947 21:28:04 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:22:49.947 21:28:04 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:49.947 21:28:04 -- common/autotest_common.sh@1325 -- # local sanitizers 00:22:49.947 21:28:04 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_nvme 00:22:49.947 21:28:04 -- common/autotest_common.sh@1327 -- # shift 00:22:49.947 21:28:04 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:22:49.947 21:28:04 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:22:49.947 21:28:04 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_nvme 00:22:49.947 21:28:04 -- common/autotest_common.sh@1331 -- # grep libasan 00:22:49.947 21:28:04 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:22:49.947 21:28:04 -- common/autotest_common.sh@1331 -- # asan_lib=/usr/lib64/libasan.so.8 00:22:49.947 21:28:04 -- common/autotest_common.sh@1332 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:22:49.947 21:28:04 -- common/autotest_common.sh@1333 -- # break 00:22:49.947 21:28:04 -- common/autotest_common.sh@1338 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_nvme' 00:22:49.947 21:28:04 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /var/jenkins/workspace/dsa-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:50.516 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:22:50.516 fio-3.35 00:22:50.516 Starting 1 thread 00:22:50.516 EAL: No free 2048 kB hugepages reported on node 1 00:22:53.051 00:22:53.051 test: (groupid=0, jobs=1): err= 0: pid=1301143: Wed Apr 24 21:28:07 2024 00:22:53.051 read: IOPS=12.2k, BW=47.8MiB/s (50.1MB/s)(95.8MiB/2005msec) 00:22:53.051 slat (nsec): min=1569, max=83508, avg=1721.30, stdev=782.71 00:22:53.051 clat (usec): min=2593, max=9993, avg=5796.86, stdev=409.63 00:22:53.051 lat (usec): min=2604, max=9995, avg=5798.58, stdev=409.58 00:22:53.051 clat percentiles (usec): 00:22:53.051 | 1.00th=[ 4883], 5.00th=[ 5145], 10.00th=[ 5342], 20.00th=[ 5473], 00:22:53.051 | 30.00th=[ 5604], 40.00th=[ 5669], 50.00th=[ 5800], 60.00th=[ 5866], 00:22:53.051 | 70.00th=[ 5997], 80.00th=[ 6128], 90.00th=[ 6259], 95.00th=[ 6390], 00:22:53.051 | 99.00th=[ 6783], 99.50th=[ 6980], 99.90th=[ 8455], 99.95th=[ 9241], 00:22:53.051 | 99.99th=[ 9896] 00:22:53.051 bw ( KiB/s): min=47936, max=49680, per=99.97%, avg=48930.00, stdev=794.55, samples=4 00:22:53.051 iops : min=11984, max=12420, avg=12232.50, stdev=198.64, samples=4 00:22:53.051 write: IOPS=12.2k, BW=47.6MiB/s (50.0MB/s)(95.5MiB/2005msec); 0 zone resets 00:22:53.051 slat (nsec): min=1618, max=77055, avg=1804.46, stdev=566.21 00:22:53.051 clat (usec): min=1004, max=8731, avg=4634.17, stdev=349.84 00:22:53.051 lat (usec): min=1012, max=8733, avg=4635.97, stdev=349.81 00:22:53.051 clat percentiles (usec): 00:22:53.051 | 1.00th=[ 3851], 5.00th=[ 4113], 10.00th=[ 4228], 20.00th=[ 4359], 00:22:53.051 | 30.00th=[ 4490], 40.00th=[ 4555], 50.00th=[ 4621], 60.00th=[ 4686], 00:22:53.051 | 70.00th=[ 4817], 80.00th=[ 4883], 90.00th=[ 5014], 95.00th=[ 5145], 00:22:53.051 | 99.00th=[ 5407], 99.50th=[ 5669], 99.90th=[ 6718], 99.95th=[ 7242], 00:22:53.051 | 
99.99th=[ 8717] 00:22:53.051 bw ( KiB/s): min=48576, max=49160, per=100.00%, avg=48788.00, stdev=275.00, samples=4 00:22:53.051 iops : min=12144, max=12290, avg=12197.00, stdev=68.75, samples=4 00:22:53.051 lat (msec) : 2=0.02%, 4=1.31%, 10=98.67% 00:22:53.051 cpu : usr=85.23%, sys=14.42%, ctx=3, majf=0, minf=1526 00:22:53.051 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:22:53.051 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:53.051 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:53.051 issued rwts: total=24534,24452,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:53.051 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:53.051 00:22:53.051 Run status group 0 (all jobs): 00:22:53.051 READ: bw=47.8MiB/s (50.1MB/s), 47.8MiB/s-47.8MiB/s (50.1MB/s-50.1MB/s), io=95.8MiB (100MB), run=2005-2005msec 00:22:53.051 WRITE: bw=47.6MiB/s (50.0MB/s), 47.6MiB/s-47.6MiB/s (50.0MB/s-50.0MB/s), io=95.5MiB (100MB), run=2005-2005msec 00:22:53.051 ----------------------------------------------------- 00:22:53.051 Suppressions used: 00:22:53.051 count bytes template 00:22:53.051 1 57 /usr/src/fio/parse.c 00:22:53.051 1 8 libtcmalloc_minimal.so 00:22:53.051 ----------------------------------------------------- 00:22:53.051 00:22:53.051 21:28:07 -- host/fio.sh@43 -- # fio_nvme /var/jenkins/workspace/dsa-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:53.051 21:28:07 -- common/autotest_common.sh@1346 -- # fio_plugin /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/dsa-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:53.051 21:28:07 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:22:53.051 21:28:07 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:53.051 21:28:07 -- common/autotest_common.sh@1325 -- # local sanitizers 00:22:53.051 21:28:07 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_nvme 00:22:53.051 21:28:07 -- common/autotest_common.sh@1327 -- # shift 00:22:53.051 21:28:07 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:22:53.051 21:28:07 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:22:53.051 21:28:07 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_nvme 00:22:53.051 21:28:07 -- common/autotest_common.sh@1331 -- # grep libasan 00:22:53.051 21:28:07 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:22:53.051 21:28:07 -- common/autotest_common.sh@1331 -- # asan_lib=/usr/lib64/libasan.so.8 00:22:53.051 21:28:07 -- common/autotest_common.sh@1332 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:22:53.051 21:28:07 -- common/autotest_common.sh@1333 -- # break 00:22:53.051 21:28:07 -- common/autotest_common.sh@1338 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_nvme' 00:22:53.051 21:28:07 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /var/jenkins/workspace/dsa-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:53.629 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:22:53.629 fio-3.35 00:22:53.629 
Starting 1 thread 00:22:53.629 EAL: No free 2048 kB hugepages reported on node 1 00:22:56.165 [2024-04-24 21:28:10.723245] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:22:56.165 00:22:56.165 test: (groupid=0, jobs=1): err= 0: pid=1301904: Wed Apr 24 21:28:10 2024 00:22:56.165 read: IOPS=8708, BW=136MiB/s (143MB/s)(273MiB/2005msec) 00:22:56.165 slat (usec): min=2, max=148, avg= 3.97, stdev= 1.82 00:22:56.166 clat (usec): min=1211, max=17843, avg=8769.56, stdev=2904.72 00:22:56.166 lat (usec): min=1214, max=17847, avg=8773.53, stdev=2905.49 00:22:56.166 clat percentiles (usec): 00:22:56.166 | 1.00th=[ 3720], 5.00th=[ 4621], 10.00th=[ 5211], 20.00th=[ 6128], 00:22:56.166 | 30.00th=[ 6915], 40.00th=[ 7570], 50.00th=[ 8356], 60.00th=[ 9372], 00:22:56.166 | 70.00th=[10290], 80.00th=[11469], 90.00th=[12911], 95.00th=[13960], 00:22:56.166 | 99.00th=[15795], 99.50th=[16188], 99.90th=[16909], 99.95th=[17171], 00:22:56.166 | 99.99th=[17695] 00:22:56.166 bw ( KiB/s): min=49888, max=93184, per=49.82%, avg=69416.00, stdev=18150.13, samples=4 00:22:56.166 iops : min= 3118, max= 5824, avg=4338.50, stdev=1134.38, samples=4 00:22:56.166 write: IOPS=4925, BW=77.0MiB/s (80.7MB/s)(141MiB/1831msec); 0 zone resets 00:22:56.166 slat (usec): min=28, max=201, avg=42.35, stdev=11.68 00:22:56.166 clat (usec): min=2958, max=19375, avg=10410.10, stdev=2598.85 00:22:56.166 lat (usec): min=3009, max=19427, avg=10452.46, stdev=2607.66 00:22:56.166 clat percentiles (usec): 00:22:56.166 | 1.00th=[ 5735], 5.00th=[ 6521], 10.00th=[ 7046], 20.00th=[ 7832], 00:22:56.166 | 30.00th=[ 8586], 40.00th=[ 9503], 50.00th=[10552], 60.00th=[11338], 00:22:56.166 | 70.00th=[11863], 80.00th=[12780], 90.00th=[13829], 95.00th=[14746], 00:22:56.166 | 99.00th=[16319], 99.50th=[16712], 99.90th=[17171], 99.95th=[17433], 00:22:56.166 | 99.99th=[19268] 00:22:56.166 bw ( KiB/s): min=51136, max=97280, per=91.55%, avg=72152.00, stdev=19205.46, samples=4 00:22:56.166 iops : min= 3196, max= 6080, avg=4509.50, stdev=1200.34, samples=4 00:22:56.166 lat (msec) : 2=0.08%, 4=1.09%, 10=57.90%, 20=40.93% 00:22:56.166 cpu : usr=87.33%, sys=12.23%, ctx=9, majf=0, minf=2252 00:22:56.166 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:22:56.166 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:56.166 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:56.166 issued rwts: total=17461,9019,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:56.166 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:56.166 00:22:56.166 Run status group 0 (all jobs): 00:22:56.166 READ: bw=136MiB/s (143MB/s), 136MiB/s-136MiB/s (143MB/s-143MB/s), io=273MiB (286MB), run=2005-2005msec 00:22:56.166 WRITE: bw=77.0MiB/s (80.7MB/s), 77.0MiB/s-77.0MiB/s (80.7MB/s-80.7MB/s), io=141MiB (148MB), run=1831-1831msec 00:22:56.166 ----------------------------------------------------- 00:22:56.166 Suppressions used: 00:22:56.166 count bytes template 00:22:56.166 1 57 /usr/src/fio/parse.c 00:22:56.166 235 22560 /usr/src/fio/iolog.c 00:22:56.166 1 8 libtcmalloc_minimal.so 00:22:56.166 ----------------------------------------------------- 00:22:56.166 00:22:56.166 21:28:11 -- host/fio.sh@45 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:56.166 21:28:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:56.166 21:28:11 -- common/autotest_common.sh@10 -- # set +x 00:22:56.166 21:28:11 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:56.166 21:28:11 -- host/fio.sh@47 -- # '[' 0 -eq 1 ']' 00:22:56.166 21:28:11 -- host/fio.sh@81 -- # trap - SIGINT SIGTERM EXIT 00:22:56.166 21:28:11 -- host/fio.sh@83 -- # rm -f ./local-test-0-verify.state 00:22:56.166 21:28:11 -- host/fio.sh@84 -- # nvmftestfini 00:22:56.166 21:28:11 -- nvmf/common.sh@477 -- # nvmfcleanup 00:22:56.166 21:28:11 -- nvmf/common.sh@117 -- # sync 00:22:56.166 21:28:11 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:56.166 21:28:11 -- nvmf/common.sh@120 -- # set +e 00:22:56.166 21:28:11 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:56.166 21:28:11 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:56.166 rmmod nvme_tcp 00:22:56.166 rmmod nvme_fabrics 00:22:56.166 rmmod nvme_keyring 00:22:56.166 21:28:11 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:56.166 21:28:11 -- nvmf/common.sh@124 -- # set -e 00:22:56.166 21:28:11 -- nvmf/common.sh@125 -- # return 0 00:22:56.166 21:28:11 -- nvmf/common.sh@478 -- # '[' -n 1300527 ']' 00:22:56.166 21:28:11 -- nvmf/common.sh@479 -- # killprocess 1300527 00:22:56.166 21:28:11 -- common/autotest_common.sh@936 -- # '[' -z 1300527 ']' 00:22:56.166 21:28:11 -- common/autotest_common.sh@940 -- # kill -0 1300527 00:22:56.166 21:28:11 -- common/autotest_common.sh@941 -- # uname 00:22:56.166 21:28:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:56.166 21:28:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1300527 00:22:56.424 21:28:11 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:56.424 21:28:11 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:56.424 21:28:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1300527' 00:22:56.424 killing process with pid 1300527 00:22:56.424 21:28:11 -- common/autotest_common.sh@955 -- # kill 1300527 00:22:56.424 21:28:11 -- common/autotest_common.sh@960 -- # wait 1300527 00:22:56.990 21:28:11 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:22:56.990 21:28:11 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:22:56.990 21:28:11 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:22:56.990 21:28:11 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:56.990 21:28:11 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:56.990 21:28:11 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:56.990 21:28:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:56.990 21:28:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:58.891 21:28:13 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:58.891 00:22:58.891 real 0m15.514s 00:22:58.891 user 1m8.032s 00:22:58.891 sys 0m5.858s 00:22:58.891 21:28:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:58.891 21:28:13 -- common/autotest_common.sh@10 -- # set +x 00:22:58.891 ************************************ 00:22:58.891 END TEST nvmf_fio_host 00:22:58.891 ************************************ 00:22:58.891 21:28:13 -- nvmf/nvmf.sh@98 -- # run_test nvmf_failover /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:22:58.891 21:28:13 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:58.891 21:28:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:58.891 21:28:13 -- common/autotest_common.sh@10 -- # set +x 00:22:59.150 ************************************ 00:22:59.150 START TEST nvmf_failover 00:22:59.150 ************************************ 
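Each suite above is wrapped by run_test, which prints the START/END banners and the real/user/sys timing seen in the log; a minimal sketch of that wrapper, with simplified internals assumed (the real helper in autotest_common.sh also manages xtrace state):

run_test() {
  local name=$1; shift
  echo "************************************"
  echo "START TEST $name"
  echo "************************************"
  time "$@"                 # e.g. .../test/nvmf/host/failover.sh --transport=tcp
  local rc=$?
  echo "************************************"
  echo "END TEST $name"
  echo "************************************"
  return $rc
}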
00:22:59.150 21:28:13 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:22:59.150 * Looking for test storage... 00:22:59.150 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host 00:22:59.150 21:28:13 -- host/failover.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:22:59.150 21:28:13 -- nvmf/common.sh@7 -- # uname -s 00:22:59.150 21:28:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:59.150 21:28:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:59.150 21:28:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:59.150 21:28:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:59.150 21:28:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:59.150 21:28:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:59.150 21:28:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:59.150 21:28:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:59.150 21:28:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:59.150 21:28:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:59.150 21:28:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b7babf-2e5c-ee11-906e-a4bf01970bf2 00:22:59.150 21:28:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=80b7babf-2e5c-ee11-906e-a4bf01970bf2 00:22:59.150 21:28:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:59.150 21:28:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:59.150 21:28:13 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:22:59.150 21:28:13 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:59.150 21:28:13 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:22:59.150 21:28:13 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:59.150 21:28:13 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:59.150 21:28:13 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:59.150 21:28:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:59.150 21:28:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:59.150 21:28:13 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:59.150 21:28:13 -- paths/export.sh@5 -- # export PATH 00:22:59.150 21:28:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:59.150 21:28:13 -- nvmf/common.sh@47 -- # : 0 00:22:59.150 21:28:13 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:59.150 21:28:13 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:59.150 21:28:13 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:59.150 21:28:13 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:59.150 21:28:13 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:59.150 21:28:13 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:59.150 21:28:13 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:59.150 21:28:13 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:59.150 21:28:13 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:59.150 21:28:13 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:59.150 21:28:13 -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:22:59.150 21:28:13 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:59.150 21:28:13 -- host/failover.sh@18 -- # nvmftestinit 00:22:59.150 21:28:13 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:22:59.150 21:28:13 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:59.150 21:28:13 -- nvmf/common.sh@437 -- # prepare_net_devs 00:22:59.150 21:28:13 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:22:59.150 21:28:13 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:22:59.150 21:28:13 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:59.150 21:28:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:59.150 21:28:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:59.150 21:28:13 -- nvmf/common.sh@403 -- # [[ phy-fallback != virt ]] 00:22:59.150 21:28:13 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:22:59.150 21:28:13 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:59.150 21:28:13 -- common/autotest_common.sh@10 -- # set +x 00:23:04.562 21:28:18 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:04.562 21:28:18 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:04.562 21:28:18 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:04.562 21:28:18 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:04.562 21:28:18 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:04.562 21:28:18 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:04.562 21:28:18 -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:23:04.562 21:28:18 -- nvmf/common.sh@295 -- # net_devs=() 00:23:04.562 21:28:18 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:04.562 21:28:18 -- nvmf/common.sh@296 -- # e810=() 00:23:04.562 21:28:18 -- nvmf/common.sh@296 -- # local -ga e810 00:23:04.562 21:28:18 -- nvmf/common.sh@297 -- # x722=() 00:23:04.562 21:28:18 -- nvmf/common.sh@297 -- # local -ga x722 00:23:04.562 21:28:18 -- nvmf/common.sh@298 -- # mlx=() 00:23:04.562 21:28:18 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:04.563 21:28:18 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:04.563 21:28:18 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:04.563 21:28:18 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:04.563 21:28:18 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:04.563 21:28:18 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:04.563 21:28:18 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:04.563 21:28:18 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:04.563 21:28:18 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:04.563 21:28:18 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:04.563 21:28:18 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:04.563 21:28:18 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:04.563 21:28:18 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:04.563 21:28:18 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:04.563 21:28:18 -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:23:04.563 21:28:18 -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:23:04.563 21:28:18 -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:23:04.563 21:28:18 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:04.563 21:28:18 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:04.563 21:28:18 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:23:04.563 Found 0000:27:00.0 (0x8086 - 0x159b) 00:23:04.563 21:28:18 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:04.563 21:28:18 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:04.563 21:28:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:04.563 21:28:18 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:04.563 21:28:18 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:04.563 21:28:18 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:04.563 21:28:18 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:23:04.563 Found 0000:27:00.1 (0x8086 - 0x159b) 00:23:04.563 21:28:18 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:04.563 21:28:18 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:04.563 21:28:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:04.563 21:28:18 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:04.563 21:28:18 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:04.563 21:28:18 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:04.563 21:28:18 -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:23:04.563 21:28:18 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:04.563 21:28:18 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:04.563 21:28:18 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:23:04.563 21:28:18 -- nvmf/common.sh@388 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:04.563 21:28:18 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:23:04.563 Found net devices under 0000:27:00.0: cvl_0_0 00:23:04.563 21:28:18 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:23:04.563 21:28:18 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:04.563 21:28:18 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:04.563 21:28:18 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:23:04.563 21:28:18 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:04.563 21:28:18 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:23:04.563 Found net devices under 0000:27:00.1: cvl_0_1 00:23:04.563 21:28:18 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:23:04.563 21:28:18 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:23:04.563 21:28:18 -- nvmf/common.sh@403 -- # is_hw=yes 00:23:04.563 21:28:18 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:23:04.563 21:28:18 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:23:04.563 21:28:18 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:23:04.563 21:28:18 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:04.563 21:28:18 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:04.563 21:28:18 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:04.563 21:28:18 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:04.563 21:28:18 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:04.563 21:28:18 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:04.563 21:28:18 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:04.563 21:28:18 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:04.563 21:28:18 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:04.563 21:28:18 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:04.563 21:28:18 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:04.563 21:28:18 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:04.563 21:28:18 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:04.563 21:28:18 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:04.563 21:28:18 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:04.563 21:28:18 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:04.563 21:28:18 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:04.563 21:28:19 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:04.563 21:28:19 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:04.563 21:28:19 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:04.563 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:04.563 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.190 ms 00:23:04.563 00:23:04.563 --- 10.0.0.2 ping statistics --- 00:23:04.563 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:04.563 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:23:04.563 21:28:19 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:04.563 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:04.563 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.096 ms 00:23:04.563 00:23:04.563 --- 10.0.0.1 ping statistics --- 00:23:04.563 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:04.563 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:23:04.563 21:28:19 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:04.563 21:28:19 -- nvmf/common.sh@411 -- # return 0 00:23:04.563 21:28:19 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:23:04.563 21:28:19 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:04.563 21:28:19 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:23:04.563 21:28:19 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:23:04.563 21:28:19 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:04.563 21:28:19 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:23:04.563 21:28:19 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:23:04.563 21:28:19 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:23:04.563 21:28:19 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:23:04.563 21:28:19 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:04.563 21:28:19 -- common/autotest_common.sh@10 -- # set +x 00:23:04.563 21:28:19 -- nvmf/common.sh@470 -- # nvmfpid=1306367 00:23:04.563 21:28:19 -- nvmf/common.sh@471 -- # waitforlisten 1306367 00:23:04.563 21:28:19 -- common/autotest_common.sh@817 -- # '[' -z 1306367 ']' 00:23:04.563 21:28:19 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:04.563 21:28:19 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:04.563 21:28:19 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:04.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:04.563 21:28:19 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:04.563 21:28:19 -- common/autotest_common.sh@10 -- # set +x 00:23:04.563 21:28:19 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:04.563 [2024-04-24 21:28:19.156591] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization... 00:23:04.563 [2024-04-24 21:28:19.156696] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:04.563 EAL: No free 2048 kB hugepages reported on node 1 00:23:04.563 [2024-04-24 21:28:19.276770] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:04.563 [2024-04-24 21:28:19.370660] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:04.563 [2024-04-24 21:28:19.370699] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:04.563 [2024-04-24 21:28:19.370708] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:04.563 [2024-04-24 21:28:19.370721] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:04.563 [2024-04-24 21:28:19.370729] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
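The target configuration the trace performs next (host/failover.sh@22 through @28) reduces to the following RPC sequence; a condensed sketch in which every command is taken verbatim from the trace and only the port loop is editorial shorthand:

# Condensed target setup: TCP transport, a 64 MiB / 512 B-block malloc
# namespace, one subsystem, and listeners on all three ports the test uses.
rpc=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem $nqn -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns $nqn Malloc0
for port in 4420 4421 4422; do
  $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s $port
done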
00:23:04.563 [2024-04-24 21:28:19.370877] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:04.563 [2024-04-24 21:28:19.370986] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:04.563 [2024-04-24 21:28:19.370995] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:05.131 21:28:19 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:05.131 21:28:19 -- common/autotest_common.sh@850 -- # return 0 00:23:05.131 21:28:19 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:23:05.131 21:28:19 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:05.131 21:28:19 -- common/autotest_common.sh@10 -- # set +x 00:23:05.131 21:28:19 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:05.131 21:28:19 -- host/failover.sh@22 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:05.131 [2024-04-24 21:28:19.998264] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:05.131 21:28:20 -- host/failover.sh@23 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:23:05.392 Malloc0 00:23:05.392 21:28:20 -- host/failover.sh@24 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:05.651 21:28:20 -- host/failover.sh@25 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:05.651 21:28:20 -- host/failover.sh@26 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:05.912 [2024-04-24 21:28:20.665047] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:05.912 21:28:20 -- host/failover.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:05.912 [2024-04-24 21:28:20.821119] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:05.912 21:28:20 -- host/failover.sh@28 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:06.170 [2024-04-24 21:28:20.977338] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:23:06.170 21:28:20 -- host/failover.sh@30 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:23:06.170 21:28:20 -- host/failover.sh@31 -- # bdevperf_pid=1306722 00:23:06.170 21:28:21 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:06.170 21:28:21 -- host/failover.sh@34 -- # waitforlisten 1306722 /var/tmp/bdevperf.sock 00:23:06.171 21:28:21 -- common/autotest_common.sh@817 -- # '[' -z 1306722 ']' 00:23:06.171 21:28:21 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:06.171 21:28:21 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:06.171 21:28:21 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:23:06.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 21:28:21 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:06.171 21:28:21 -- common/autotest_common.sh@10 -- # set +x 00:23:07.106 21:28:21 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:07.106 21:28:21 -- common/autotest_common.sh@850 -- # return 0 00:23:07.106 21:28:21 -- host/failover.sh@35 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:07.366 NVMe0n1 00:23:07.366 21:28:22 -- host/failover.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:07.627 00:23:07.627 21:28:22 -- host/failover.sh@39 -- # run_test_pid=1307024 00:23:07.627 21:28:22 -- host/failover.sh@41 -- # sleep 1 00:23:07.627 21:28:22 -- host/failover.sh@38 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:08.562 21:28:23 -- host/failover.sh@43 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:08.821 [2024-04-24 21:28:23.626388] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set [... the identical nvmf_tcp_qpair_set_recv_state message for tqpair=0x618000002880 repeats several dozen times while the 4420 connection tears down; duplicates omitted ...] 00:23:08.821 21:28:23 -- host/failover.sh@45 -- # sleep 3 00:23:12.117 21:28:26 -- host/failover.sh@47 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:12.117
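From here the test rotates the active listener while bdevperf keeps issuing I/O; in shorthand (ports and sleeps as executed below, commands verbatim from the trace):

# Listener rotation driving the remaining failovers (host/failover.sh@48-@57).
rpc=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1
$rpc nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4421   # fail over to 4422
sleep 3
$rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4420      # restore the first port
sleep 1
$rpc nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4422   # fail back to 4420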
00:23:12.117 21:28:26 -- host/failover.sh@48 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 [2024-04-24 21:28:27.053777] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set [... the identical message for tqpair=0x618000003080 repeats while the 4421 connection tears down; duplicates omitted ...] 00:23:12.117 21:28:27 -- host/failover.sh@50 -- # sleep 3 00:23:15.412 21:28:30 -- host/failover.sh@53 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 [2024-04-24 21:28:30.217125] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 21:28:30 -- host/failover.sh@55 -- # sleep 1 00:23:16.347 21:28:31 -- host/failover.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 [2024-04-24 21:28:31.380780] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set [... the identical message for tqpair=0x618000003c80 repeats while the 4422 connection tears down; duplicates omitted ...] 00:23:16.607 21:28:31 -- host/failover.sh@59 -- # wait 1307024 00:23:23.192 0 00:23:23.192 21:28:37 -- host/failover.sh@61 -- # killprocess 1306722 00:23:23.192 21:28:37 -- common/autotest_common.sh@936 -- # '[' -z 1306722 ']' 00:23:23.192 21:28:37 -- common/autotest_common.sh@940 -- # kill -0 1306722 00:23:23.192 21:28:37 -- common/autotest_common.sh@941 -- # uname 00:23:23.192 21:28:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:23.192 21:28:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1306722 00:23:23.192 21:28:37 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:23.192 21:28:37 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:23.192 21:28:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1306722' 00:23:23.192 killing process with pid 1306722 00:23:23.192 21:28:37 -- common/autotest_common.sh@955 -- # kill 1306722 00:23:23.192 21:28:37 -- common/autotest_common.sh@960 -- # wait 1306722 00:23:23.192 21:28:38 -- host/failover.sh@63 -- # cat /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/try.txt [2024-04-24 21:28:21.075440] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization...
00:23:23.192 [2024-04-24 21:28:21.075580] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1306722 ] 00:23:23.192 EAL: No free 2048 kB hugepages reported on node 1 00:23:23.192 [2024-04-24 21:28:21.190178] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:23.192 [2024-04-24 21:28:21.279722] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:23.192 Running I/O for 15 seconds... 00:23:23.192 [2024-04-24 21:28:23.628234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:96584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.192 [2024-04-24 21:28:23.628288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [... the same nvme_io_qpair_print_command / 'ABORTED - SQ DELETION (00/08)' pair repeats for each queued READ and WRITE (lba 96584 through 97200) as qid:1 is deleted during the listener removal; duplicates omitted ...] 00:23:23.193 [2024-04-24 21:28:23.629560] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:97208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.193 [2024-04-24 21:28:23.629568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.193 [2024-04-24 21:28:23.629577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:97216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.193 [2024-04-24 21:28:23.629585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.193 [2024-04-24 21:28:23.629595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:97224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.193 [2024-04-24 21:28:23.629603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.193 [2024-04-24 21:28:23.629613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:97232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.193 [2024-04-24 21:28:23.629620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.193 [2024-04-24 21:28:23.629629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:97240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.193 [2024-04-24 21:28:23.629637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.193 [2024-04-24 21:28:23.629648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:97248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.193 [2024-04-24 21:28:23.629656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.193 [2024-04-24 21:28:23.629665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:97256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.193 [2024-04-24 21:28:23.629673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.193 [2024-04-24 21:28:23.629682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:97264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.193 [2024-04-24 21:28:23.629689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.193 [2024-04-24 21:28:23.629699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:97272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.193 [2024-04-24 21:28:23.629707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.193 [2024-04-24 21:28:23.629716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:97280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.193 [2024-04-24 21:28:23.629724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.193 [2024-04-24 21:28:23.629734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:82 nsid:1 lba:97288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.193 [2024-04-24 21:28:23.629742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.193 [2024-04-24 21:28:23.629752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:97296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.193 [2024-04-24 21:28:23.629760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.193 [2024-04-24 21:28:23.629769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:97304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.193 [2024-04-24 21:28:23.629781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.193 [2024-04-24 21:28:23.629790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:97312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.193 [2024-04-24 21:28:23.629798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.193 [2024-04-24 21:28:23.629808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:97320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.193 [2024-04-24 21:28:23.629816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.193 [2024-04-24 21:28:23.629826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:97328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.193 [2024-04-24 21:28:23.629833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.193 [2024-04-24 21:28:23.629843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:97336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.193 [2024-04-24 21:28:23.629851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.193 [2024-04-24 21:28:23.629861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:97344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.193 [2024-04-24 21:28:23.629870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.193 [2024-04-24 21:28:23.629880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:97352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.193 [2024-04-24 21:28:23.629887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.193 [2024-04-24 21:28:23.629897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:97360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.193 [2024-04-24 21:28:23.629904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.193 [2024-04-24 21:28:23.629914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:97368 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:23:23.193 [2024-04-24 21:28:23.629922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.193 [2024-04-24 21:28:23.629932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:97376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.193 [2024-04-24 21:28:23.629939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.193 [2024-04-24 21:28:23.629949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:97384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.193 [2024-04-24 21:28:23.629957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.193 [2024-04-24 21:28:23.629966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:97392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.193 [2024-04-24 21:28:23.629973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.193 [2024-04-24 21:28:23.629983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:97400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.193 [2024-04-24 21:28:23.629991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.193 [2024-04-24 21:28:23.630000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:97408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.193 [2024-04-24 21:28:23.630009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.193 [2024-04-24 21:28:23.630018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:97416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.193 [2024-04-24 21:28:23.630025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.193 [2024-04-24 21:28:23.630035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:96776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.193 [2024-04-24 21:28:23.630043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.193 [2024-04-24 21:28:23.630052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:96784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.193 [2024-04-24 21:28:23.630060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.193 [2024-04-24 21:28:23.630089] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.193 [2024-04-24 21:28:23.630100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97424 len:8 PRP1 0x0 PRP2 0x0 00:23:23.193 [2024-04-24 21:28:23.630110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.193 [2024-04-24 21:28:23.630160] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:23.193 [2024-04-24 21:28:23.630171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.193 [2024-04-24 21:28:23.630181] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:23.193 [2024-04-24 21:28:23.630190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.193 [2024-04-24 21:28:23.630199] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:23.193 [2024-04-24 21:28:23.630207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.193 [2024-04-24 21:28:23.630215] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:23.193 [2024-04-24 21:28:23.630224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.193 [2024-04-24 21:28:23.630233] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000004a40 is same with the state(5) to be set 00:23:23.193 [2024-04-24 21:28:23.630417] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:23.193 [2024-04-24 21:28:23.630426] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.193 [2024-04-24 21:28:23.630437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97432 len:8 PRP1 0x0 PRP2 0x0 00:23:23.193 [2024-04-24 21:28:23.630446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.193 [2024-04-24 21:28:23.630457] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:23.193 [2024-04-24 21:28:23.630464] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.193 [2024-04-24 21:28:23.630472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97440 len:8 PRP1 0x0 PRP2 0x0 00:23:23.193 [2024-04-24 21:28:23.630480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.193 [2024-04-24 21:28:23.630488] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:23.193 [2024-04-24 21:28:23.630494] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.193 [2024-04-24 21:28:23.630501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97448 len:8 PRP1 0x0 PRP2 0x0 00:23:23.193 [2024-04-24 21:28:23.630508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.193 [2024-04-24 21:28:23.630516] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:23.193 [2024-04-24 21:28:23.630522] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.193 [2024-04-24 21:28:23.630529] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97456 len:8 PRP1 0x0 PRP2 0x0 00:23:23.193 [2024-04-24 21:28:23.630538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.193 [2024-04-24 21:28:23.630546] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:23.193 [2024-04-24 21:28:23.630551] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.193 [2024-04-24 21:28:23.630558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97464 len:8 PRP1 0x0 PRP2 0x0 00:23:23.193 [2024-04-24 21:28:23.630567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.193 [2024-04-24 21:28:23.630577] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:23.193 [2024-04-24 21:28:23.630583] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.194 [2024-04-24 21:28:23.630591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97472 len:8 PRP1 0x0 PRP2 0x0 00:23:23.194 [2024-04-24 21:28:23.630598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.194 [2024-04-24 21:28:23.630606] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:23.194 [2024-04-24 21:28:23.630612] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.194 [2024-04-24 21:28:23.630619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97480 len:8 PRP1 0x0 PRP2 0x0 00:23:23.194 [2024-04-24 21:28:23.630626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.194 [2024-04-24 21:28:23.630635] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:23.194 [2024-04-24 21:28:23.630641] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.194 [2024-04-24 21:28:23.630648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97488 len:8 PRP1 0x0 PRP2 0x0 00:23:23.194 [2024-04-24 21:28:23.630655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.194 [2024-04-24 21:28:23.630663] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:23.194 [2024-04-24 21:28:23.630669] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.194 [2024-04-24 21:28:23.630676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97496 len:8 PRP1 0x0 PRP2 0x0 00:23:23.194 [2024-04-24 21:28:23.630685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.194 [2024-04-24 21:28:23.630692] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:23.194 [2024-04-24 21:28:23.630699] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.194 [2024-04-24 21:28:23.630706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:97504 len:8 PRP1 0x0 PRP2 0x0 00:23:23.194 [2024-04-24 21:28:23.630714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.194 [2024-04-24 21:28:23.630722] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:23.194 [2024-04-24 21:28:23.630728] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.194 [2024-04-24 21:28:23.630734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97512 len:8 PRP1 0x0 PRP2 0x0 00:23:23.194 [2024-04-24 21:28:23.630742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.194 [2024-04-24 21:28:23.630750] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:23.194 [2024-04-24 21:28:23.630756] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.194 [2024-04-24 21:28:23.630763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97520 len:8 PRP1 0x0 PRP2 0x0 00:23:23.194 [2024-04-24 21:28:23.630770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.194 [2024-04-24 21:28:23.630778] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:23.194 [2024-04-24 21:28:23.630784] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.194 [2024-04-24 21:28:23.630791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97528 len:8 PRP1 0x0 PRP2 0x0 00:23:23.194 [2024-04-24 21:28:23.630799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.194 [2024-04-24 21:28:23.630808] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:23.194 [2024-04-24 21:28:23.630814] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.194 [2024-04-24 21:28:23.630821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97536 len:8 PRP1 0x0 PRP2 0x0 00:23:23.194 [2024-04-24 21:28:23.630828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.194 [2024-04-24 21:28:23.630836] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:23.194 [2024-04-24 21:28:23.630842] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.194 [2024-04-24 21:28:23.630849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97544 len:8 PRP1 0x0 PRP2 0x0 00:23:23.194 [2024-04-24 21:28:23.630857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.194 [2024-04-24 21:28:23.630865] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:23.194 [2024-04-24 21:28:23.630871] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.194 [2024-04-24 21:28:23.630878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97552 len:8 PRP1 0x0 PRP2 0x0 
00:23:23.194 [2024-04-24 21:28:23.630885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.194 [2024-04-24 21:28:23.630893] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:23.194 [2024-04-24 21:28:23.630899] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.194 [2024-04-24 21:28:23.630906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97560 len:8 PRP1 0x0 PRP2 0x0 00:23:23.194 [2024-04-24 21:28:23.630914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.194 [2024-04-24 21:28:23.630922] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:23.194 [2024-04-24 21:28:23.630928] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.194 [2024-04-24 21:28:23.630935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97568 len:8 PRP1 0x0 PRP2 0x0 00:23:23.194 [2024-04-24 21:28:23.630943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.194 [2024-04-24 21:28:23.630950] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:23.194 [2024-04-24 21:28:23.630956] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.194 [2024-04-24 21:28:23.630963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97576 len:8 PRP1 0x0 PRP2 0x0 00:23:23.194 [2024-04-24 21:28:23.630971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.194 [2024-04-24 21:28:23.630979] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:23.194 [2024-04-24 21:28:23.630985] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.194 [2024-04-24 21:28:23.630991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97584 len:8 PRP1 0x0 PRP2 0x0 00:23:23.194 [2024-04-24 21:28:23.630999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.194 [2024-04-24 21:28:23.631007] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:23.194 [2024-04-24 21:28:23.631013] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.194 [2024-04-24 21:28:23.631021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97592 len:8 PRP1 0x0 PRP2 0x0 00:23:23.194 [2024-04-24 21:28:23.631033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.194 [2024-04-24 21:28:23.631041] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:23.194 [2024-04-24 21:28:23.631047] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.194 [2024-04-24 21:28:23.631054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97600 len:8 PRP1 0x0 PRP2 0x0 00:23:23.194 [2024-04-24 21:28:23.631063] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.194 [2024-04-24 21:28:23.631070] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:23.194 [2024-04-24 21:28:23.631076] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.194 [2024-04-24 21:28:23.631083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96792 len:8 PRP1 0x0 PRP2 0x0 00:23:23.194 [2024-04-24 21:28:23.631090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.194 [2024-04-24 21:28:23.631099] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:23.194 [2024-04-24 21:28:23.631105] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.194 [2024-04-24 21:28:23.631112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96800 len:8 PRP1 0x0 PRP2 0x0 00:23:23.194 [2024-04-24 21:28:23.631119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.194 [2024-04-24 21:28:23.631127] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:23.194 [2024-04-24 21:28:23.631133] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.194 [2024-04-24 21:28:23.631141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96808 len:8 PRP1 0x0 PRP2 0x0 00:23:23.194 [2024-04-24 21:28:23.631149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.194 [2024-04-24 21:28:23.631156] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:23.194 [2024-04-24 21:28:23.631163] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.194 [2024-04-24 21:28:23.631170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96816 len:8 PRP1 0x0 PRP2 0x0 00:23:23.194 [2024-04-24 21:28:23.631177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.194 [2024-04-24 21:28:23.631186] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:23.194 [2024-04-24 21:28:23.631191] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.194 [2024-04-24 21:28:23.631198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96824 len:8 PRP1 0x0 PRP2 0x0 00:23:23.194 [2024-04-24 21:28:23.631206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.194 [2024-04-24 21:28:23.631214] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:23.194 [2024-04-24 21:28:23.631220] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.194 [2024-04-24 21:28:23.631227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96832 len:8 PRP1 0x0 PRP2 0x0 00:23:23.194 [2024-04-24 21:28:23.631234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.194 [2024-04-24 21:28:23.631243] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:23.194 [2024-04-24 21:28:23.631250] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.194 [2024-04-24 21:28:23.631257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96840 len:8 PRP1 0x0 PRP2 0x0 00:23:23.194 [2024-04-24 21:28:23.631265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.194 [2024-04-24 21:28:23.631279] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:23.194 [2024-04-24 21:28:23.631285] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.194 [2024-04-24 21:28:23.631292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96584 len:8 PRP1 0x0 PRP2 0x0 00:23:23.194 [2024-04-24 21:28:23.631300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.194 [2024-04-24 21:28:23.631307] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:23.194 [2024-04-24 21:28:23.631313] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.194 [2024-04-24 21:28:23.631320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96592 len:8 PRP1 0x0 PRP2 0x0 00:23:23.194 [2024-04-24 21:28:23.631328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.194 [2024-04-24 21:28:23.631336] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:23.194 [2024-04-24 21:28:23.631342] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.194 [2024-04-24 21:28:23.631349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96600 len:8 PRP1 0x0 PRP2 0x0 00:23:23.194 [2024-04-24 21:28:23.631357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.194 [2024-04-24 21:28:23.631364] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:23.194 [2024-04-24 21:28:23.631370] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.194 [2024-04-24 21:28:23.631377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96608 len:8 PRP1 0x0 PRP2 0x0 00:23:23.194 [2024-04-24 21:28:23.631385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.194 [2024-04-24 21:28:23.631393] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:23.194 [2024-04-24 21:28:23.631399] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.194 [2024-04-24 21:28:23.631406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96616 len:8 PRP1 0x0 PRP2 0x0 00:23:23.194 [2024-04-24 21:28:23.631414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:23.194 [2024-04-24 21:28:23.631422] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:23.194 [2024-04-24 21:28:23.631428] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.194 [2024-04-24 21:28:23.631435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96624 len:8 PRP1 0x0 PRP2 0x0 00:23:23.194 [2024-04-24 21:28:23.631443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.194 [2024-04-24 21:28:23.631450] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:23.194 [2024-04-24 21:28:23.631456] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.194 [2024-04-24 21:28:23.631463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96632 len:8 PRP1 0x0 PRP2 0x0 00:23:23.194 [2024-04-24 21:28:23.631472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.194 [2024-04-24 21:28:23.631480] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:23.194 [2024-04-24 21:28:23.631486] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.194 [2024-04-24 21:28:23.631493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96640 len:8 PRP1 0x0 PRP2 0x0 00:23:23.194 [2024-04-24 21:28:23.631501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.194 [2024-04-24 21:28:23.631510] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:23.194 [2024-04-24 21:28:23.631516] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.194 [2024-04-24 21:28:23.631523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96648 len:8 PRP1 0x0 PRP2 0x0 00:23:23.194 [2024-04-24 21:28:23.631531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.194 [2024-04-24 21:28:23.631538] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:23.194 [2024-04-24 21:28:23.631544] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.194 [2024-04-24 21:28:23.631551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96656 len:8 PRP1 0x0 PRP2 0x0 00:23:23.194 [2024-04-24 21:28:23.631559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.194 [2024-04-24 21:28:23.631567] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:23.194 [2024-04-24 21:28:23.631573] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.194 [2024-04-24 21:28:23.631580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96664 len:8 PRP1 0x0 PRP2 0x0 00:23:23.194 [2024-04-24 21:28:23.631588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.194 [2024-04-24 21:28:23.631596] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:23.194 [2024-04-24 21:28:23.631615] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.194 [2024-04-24 21:28:23.631622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96672 len:8 PRP1 0x0 PRP2 0x0 00:23:23.194 [2024-04-24 21:28:23.631630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.194 [2024-04-24 21:28:23.631638] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:23.194 [2024-04-24 21:28:23.631645] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.194 [2024-04-24 21:28:23.631652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96680 len:8 PRP1 0x0 PRP2 0x0 00:23:23.194 [2024-04-24 21:28:23.631660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.194 [2024-04-24 21:28:23.631668] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:23.194 [2024-04-24 21:28:23.631675] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.194 [2024-04-24 21:28:23.631682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96688 len:8 PRP1 0x0 PRP2 0x0 00:23:23.194 [2024-04-24 21:28:23.631690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.194 [2024-04-24 21:28:23.631698] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:23.194 [2024-04-24 21:28:23.631704] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.194 [2024-04-24 21:28:23.631711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96696 len:8 PRP1 0x0 PRP2 0x0 00:23:23.194 [2024-04-24 21:28:23.631719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.194 [2024-04-24 21:28:23.631727] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:23.195 [2024-04-24 21:28:23.631733] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.195 [2024-04-24 21:28:23.631741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96704 len:8 PRP1 0x0 PRP2 0x0 00:23:23.195 [2024-04-24 21:28:23.631749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.195 [2024-04-24 21:28:23.631756] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:23.195 [2024-04-24 21:28:23.631763] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.195 [2024-04-24 21:28:23.631770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96712 len:8 PRP1 0x0 PRP2 0x0 00:23:23.195 [2024-04-24 21:28:23.631778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.195 [2024-04-24 21:28:23.631786] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 
00:23:23.195 [2024-04-24 21:28:23.631792] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.195 [2024-04-24 21:28:23.631799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96720 len:8 PRP1 0x0 PRP2 0x0 00:23:23.195 [2024-04-24 21:28:23.631808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.195 [2024-04-24 21:28:23.631816] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:23.195 [2024-04-24 21:28:23.631822] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.195 [2024-04-24 21:28:23.631829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96728 len:8 PRP1 0x0 PRP2 0x0 00:23:23.195 [2024-04-24 21:28:23.631836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.195 [2024-04-24 21:28:23.631845] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:23.195 [2024-04-24 21:28:23.631851] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.195 [2024-04-24 21:28:23.631858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96736 len:8 PRP1 0x0 PRP2 0x0 00:23:23.195 [2024-04-24 21:28:23.631865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.195 [2024-04-24 21:28:23.631873] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:23.195 [2024-04-24 21:28:23.631879] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.195 [2024-04-24 21:28:23.631886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96744 len:8 PRP1 0x0 PRP2 0x0 00:23:23.195 [2024-04-24 21:28:23.631894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.195 [2024-04-24 21:28:23.631901] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:23.195 [2024-04-24 21:28:23.631907] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.195 [2024-04-24 21:28:23.631914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96752 len:8 PRP1 0x0 PRP2 0x0 00:23:23.195 [2024-04-24 21:28:23.631922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.195 [2024-04-24 21:28:23.631929] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:23.195 [2024-04-24 21:28:23.631937] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.195 [2024-04-24 21:28:23.631944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96760 len:8 PRP1 0x0 PRP2 0x0 00:23:23.195 [2024-04-24 21:28:23.631952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.195 [2024-04-24 21:28:23.631959] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:23.195 [2024-04-24 21:28:23.631965] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.195 [2024-04-24 21:28:23.631972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96768 len:8 PRP1 0x0 PRP2 0x0 00:23:23.195 [2024-04-24 21:28:23.631985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.195 [2024-04-24 21:28:23.631993] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:23.195 [2024-04-24 21:28:23.632000] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.195 [2024-04-24 21:28:23.632007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96848 len:8 PRP1 0x0 PRP2 0x0 00:23:23.195 [2024-04-24 21:28:23.632015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.195 [2024-04-24 21:28:23.632022] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:23.195 [2024-04-24 21:28:23.632028] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.195 [2024-04-24 21:28:23.632039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96856 len:8 PRP1 0x0 PRP2 0x0 00:23:23.195 [2024-04-24 21:28:23.632047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.195 [2024-04-24 21:28:23.632055] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:23.195 [2024-04-24 21:28:23.632062] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.195 [2024-04-24 21:28:23.632069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96864 len:8 PRP1 0x0 PRP2 0x0 00:23:23.195 [2024-04-24 21:28:23.632076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.195 [2024-04-24 21:28:23.632084] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:23.195 [2024-04-24 21:28:23.632102] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.195 [2024-04-24 21:28:23.632109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96872 len:8 PRP1 0x0 PRP2 0x0 00:23:23.195 [2024-04-24 21:28:23.632117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.195 [2024-04-24 21:28:23.632125] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:23.195 [2024-04-24 21:28:23.632131] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.195 [2024-04-24 21:28:23.632137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96880 len:8 PRP1 0x0 PRP2 0x0 00:23:23.195 [2024-04-24 21:28:23.632145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.195 [2024-04-24 21:28:23.632153] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:23.195 [2024-04-24 21:28:23.632159] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command 
completed manually: 00:23:23.195 [2024-04-24 21:28:23.632166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96888 len:8 PRP1 0x0 PRP2 0x0 00:23:23.195 [2024-04-24 21:28:23.632174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.195 [2024-04-24 21:28:23.632183] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:23.195 [2024-04-24 21:28:23.632189] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.195 [2024-04-24 21:28:23.632196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96896 len:8 PRP1 0x0 PRP2 0x0 00:23:23.195 [2024-04-24 21:28:23.632204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.195 [2024-04-24 21:28:23.632212] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:23.195 [2024-04-24 21:28:23.632218] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.195 [2024-04-24 21:28:23.632225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96904 len:8 PRP1 0x0 PRP2 0x0 00:23:23.195 [2024-04-24 21:28:23.632233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.195 [2024-04-24 21:28:23.632241] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:23.195 [2024-04-24 21:28:23.632247] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.195 [2024-04-24 21:28:23.632253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96912 len:8 PRP1 0x0 PRP2 0x0 00:23:23.195 [2024-04-24 21:28:23.632261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.195 [2024-04-24 21:28:23.632274] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:23.195 [2024-04-24 21:28:23.632281] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.195 [2024-04-24 21:28:23.632288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96920 len:8 PRP1 0x0 PRP2 0x0 00:23:23.195 [2024-04-24 21:28:23.632296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.195 [2024-04-24 21:28:23.632304] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:23.195 [2024-04-24 21:28:23.632309] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.195 [2024-04-24 21:28:23.632316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96928 len:8 PRP1 0x0 PRP2 0x0 00:23:23.195 [2024-04-24 21:28:23.632324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.195 [2024-04-24 21:28:23.632332] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:23.195 [2024-04-24 21:28:23.632338] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.195 [2024-04-24 
21:28:23.632344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96936 len:8 PRP1 0x0 PRP2 0x0 00:23:23.195 [2024-04-24 21:28:23.632352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.195 [2024-04-24 21:28:23.632360] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:23.195 [2024-04-24 21:28:23.632366] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.195 [2024-04-24 21:28:23.632373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96944 len:8 PRP1 0x0 PRP2 0x0 00:23:23.195 [2024-04-24 21:28:23.632380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.195 [2024-04-24 21:28:23.632388] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:23.195 [2024-04-24 21:28:23.632394] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.195 [2024-04-24 21:28:23.632401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96952 len:8 PRP1 0x0 PRP2 0x0 00:23:23.195 [2024-04-24 21:28:23.632410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.195 [2024-04-24 21:28:23.632418] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:23.195 [2024-04-24 21:28:23.632424] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.195 [2024-04-24 21:28:23.632431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96960 len:8 PRP1 0x0 PRP2 0x0 00:23:23.195 [2024-04-24 21:28:23.632439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.195 [2024-04-24 21:28:23.632446] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:23.195 [2024-04-24 21:28:23.632452] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.195 [2024-04-24 21:28:23.632459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96968 len:8 PRP1 0x0 PRP2 0x0 00:23:23.195 [2024-04-24 21:28:23.632467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.195 [2024-04-24 21:28:23.632475] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:23.195 [2024-04-24 21:28:23.632481] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.195 [2024-04-24 21:28:23.632488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96976 len:8 PRP1 0x0 PRP2 0x0 00:23:23.195 [2024-04-24 21:28:23.632496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.195 [2024-04-24 21:28:23.632504] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:23.195 [2024-04-24 21:28:23.632509] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.195 [2024-04-24 21:28:23.632516] nvme_qpair.c: 
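Note: the run of ABORTED - SQ DELETION (00/08) completions above is the SPDK host driver draining a TCP qpair after the target deleted the submission queue: nvme_qpair_abort_queued_reqs() walks the queued requests and nvme_qpair_manual_complete_request() completes each one with generic status SCT 0x0 / SC 0x8, the (00/08) code printed per record. Below is a minimal sketch of how an application's I/O completion callback could recognize that status; the callback name and message are hypothetical, while the spdk_nvme_cpl accessors and status macros come from spdk/nvme.h and spdk/nvme_spec.h.

#include <stdio.h>
#include "spdk/nvme.h"

/* Sketch: classify the status that this log prints as
 * "ABORTED - SQ DELETION (00/08)" (SCT 0x0, SC 0x8). */
static void
io_complete_cb(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
	if (spdk_nvme_cpl_is_error(cpl) &&
	    cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
	    cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) {
		/* The I/O never executed: the SQ was deleted and the
		 * driver completed the queued request manually. */
		fprintf(stderr, "i/o aborted by SQ deletion (dnr=%u)\n",
		        (unsigned)cpl->status.dnr);
	}
}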
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96984 len:8 PRP1 0x0 PRP2 0x0 00:23:23.195 [2024-04-24 21:28:23.632524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.195 [2024-04-24 21:28:23.632532] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:23.195 [2024-04-24 21:28:23.632538] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.195 [2024-04-24 21:28:23.632544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96992 len:8 PRP1 0x0 PRP2 0x0 00:23:23.195 [2024-04-24 21:28:23.636246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.195 [2024-04-24 21:28:23.636293] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:23.195 [2024-04-24 21:28:23.636305] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.195 [2024-04-24 21:28:23.636315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97000 len:8 PRP1 0x0 PRP2 0x0 00:23:23.195 [2024-04-24 21:28:23.636326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.195 [2024-04-24 21:28:23.636334] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:23.195 [2024-04-24 21:28:23.636341] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.195 [2024-04-24 21:28:23.636348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97008 len:8 PRP1 0x0 PRP2 0x0 00:23:23.195 [2024-04-24 21:28:23.636356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.195 [2024-04-24 21:28:23.636364] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:23.195 [2024-04-24 21:28:23.636370] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.195 [2024-04-24 21:28:23.636382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97016 len:8 PRP1 0x0 PRP2 0x0 00:23:23.195 [2024-04-24 21:28:23.636390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.195 [2024-04-24 21:28:23.636398] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:23.195 [2024-04-24 21:28:23.636404] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.195 [2024-04-24 21:28:23.636411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97024 len:8 PRP1 0x0 PRP2 0x0 00:23:23.195 [2024-04-24 21:28:23.636419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.195 [2024-04-24 21:28:23.636427] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:23.195 [2024-04-24 21:28:23.636433] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.195 [2024-04-24 21:28:23.636440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:97032 len:8 PRP1 0x0 PRP2 0x0 00:23:23.195 [2024-04-24 21:28:23.636448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.195 [2024-04-24 21:28:23.636456] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:23.195 [2024-04-24 21:28:23.636462] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.195 [2024-04-24 21:28:23.636469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97040 len:8 PRP1 0x0 PRP2 0x0 00:23:23.195 [2024-04-24 21:28:23.636477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.195 [2024-04-24 21:28:23.636485] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:23.195 [2024-04-24 21:28:23.636491] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.195 [2024-04-24 21:28:23.636498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97048 len:8 PRP1 0x0 PRP2 0x0 00:23:23.195 [2024-04-24 21:28:23.636505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.195 [2024-04-24 21:28:23.636513] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:23.195 [2024-04-24 21:28:23.636519] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.195 [2024-04-24 21:28:23.636526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97056 len:8 PRP1 0x0 PRP2 0x0 00:23:23.195 [2024-04-24 21:28:23.636533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.195 [2024-04-24 21:28:23.636541] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:23.195 [2024-04-24 21:28:23.636547] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.195 [2024-04-24 21:28:23.636554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97064 len:8 PRP1 0x0 PRP2 0x0 00:23:23.195 [2024-04-24 21:28:23.636562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.195 [2024-04-24 21:28:23.636570] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:23.195 [2024-04-24 21:28:23.636576] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.195 [2024-04-24 21:28:23.636583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97072 len:8 PRP1 0x0 PRP2 0x0 00:23:23.195 [2024-04-24 21:28:23.636591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.195 [2024-04-24 21:28:23.636598] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:23.195 [2024-04-24 21:28:23.636606] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.195 [2024-04-24 21:28:23.636612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97080 len:8 PRP1 0x0 PRP2 0x0 
00:23:23.195 [2024-04-24 21:28:23.636621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.195 [2024-04-24 21:28:23.636628] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:23.195 [2024-04-24 21:28:23.636635] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.195 [2024-04-24 21:28:23.636641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97088 len:8 PRP1 0x0 PRP2 0x0 00:23:23.195 [2024-04-24 21:28:23.636649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.195 [2024-04-24 21:28:23.636657] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:23.195 [2024-04-24 21:28:23.636663] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.195 [2024-04-24 21:28:23.636670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97096 len:8 PRP1 0x0 PRP2 0x0 00:23:23.195 [2024-04-24 21:28:23.636706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.195 [2024-04-24 21:28:23.636714] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:23.195 [2024-04-24 21:28:23.636721] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.195 [2024-04-24 21:28:23.636728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97104 len:8 PRP1 0x0 PRP2 0x0 00:23:23.195 [2024-04-24 21:28:23.636736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.195 [2024-04-24 21:28:23.636744] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:23.195 [2024-04-24 21:28:23.636750] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.195 [2024-04-24 21:28:23.636757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97112 len:8 PRP1 0x0 PRP2 0x0 00:23:23.195 [2024-04-24 21:28:23.636764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.195 [2024-04-24 21:28:23.636772] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:23.195 [2024-04-24 21:28:23.636779] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.195 [2024-04-24 21:28:23.636786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97120 len:8 PRP1 0x0 PRP2 0x0 00:23:23.195 [2024-04-24 21:28:23.636794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.195 [2024-04-24 21:28:23.636802] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:23.195 [2024-04-24 21:28:23.636808] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.195 [2024-04-24 21:28:23.636815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97128 len:8 PRP1 0x0 PRP2 0x0 00:23:23.195 [2024-04-24 21:28:23.636824] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.195 [2024-04-24 21:28:23.636832] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:23.195 [2024-04-24 21:28:23.636838] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.196 [2024-04-24 21:28:23.636846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97136 len:8 PRP1 0x0 PRP2 0x0 00:23:23.196 [2024-04-24 21:28:23.636854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.196 [2024-04-24 21:28:23.636864] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:23.196 [2024-04-24 21:28:23.636870] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.196 [2024-04-24 21:28:23.636876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97144 len:8 PRP1 0x0 PRP2 0x0 00:23:23.196 [2024-04-24 21:28:23.636884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.196 [2024-04-24 21:28:23.636891] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:23.196 [2024-04-24 21:28:23.636898] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.196 [2024-04-24 21:28:23.636904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97152 len:8 PRP1 0x0 PRP2 0x0 00:23:23.196 [2024-04-24 21:28:23.636913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.196 [2024-04-24 21:28:23.636920] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:23.196 [2024-04-24 21:28:23.636926] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.196 [2024-04-24 21:28:23.636933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97160 len:8 PRP1 0x0 PRP2 0x0 00:23:23.196 [2024-04-24 21:28:23.636940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.196 [2024-04-24 21:28:23.636947] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:23.196 [2024-04-24 21:28:23.636953] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.196 [2024-04-24 21:28:23.636960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97168 len:8 PRP1 0x0 PRP2 0x0 00:23:23.196 [2024-04-24 21:28:23.636968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.196 [2024-04-24 21:28:23.636976] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:23.196 [2024-04-24 21:28:23.636981] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.196 [2024-04-24 21:28:23.636989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97176 len:8 PRP1 0x0 PRP2 0x0 00:23:23.196 [2024-04-24 21:28:23.636997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.196 [2024-04-24 21:28:23.637005] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:23.196 [2024-04-24 21:28:23.637011] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.196 [2024-04-24 21:28:23.637017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97184 len:8 PRP1 0x0 PRP2 0x0 00:23:23.196 [2024-04-24 21:28:23.637025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.196 [2024-04-24 21:28:23.637033] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:23.196 [2024-04-24 21:28:23.637039] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.196 [2024-04-24 21:28:23.637045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97192 len:8 PRP1 0x0 PRP2 0x0 00:23:23.196 [2024-04-24 21:28:23.637053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.196 [2024-04-24 21:28:23.637061] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:23.196 [2024-04-24 21:28:23.637066] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.196 [2024-04-24 21:28:23.637074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97200 len:8 PRP1 0x0 PRP2 0x0 00:23:23.196 [2024-04-24 21:28:23.637082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.196 [2024-04-24 21:28:23.637090] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:23.196 [2024-04-24 21:28:23.637095] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.196 [2024-04-24 21:28:23.637102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97208 len:8 PRP1 0x0 PRP2 0x0 00:23:23.196 [2024-04-24 21:28:23.637110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.196 [2024-04-24 21:28:23.637118] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:23.196 [2024-04-24 21:28:23.637123] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.196 [2024-04-24 21:28:23.637131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97216 len:8 PRP1 0x0 PRP2 0x0 00:23:23.196 [2024-04-24 21:28:23.637138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.196 [2024-04-24 21:28:23.637146] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:23.196 [2024-04-24 21:28:23.637152] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.196 [2024-04-24 21:28:23.637159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97224 len:8 PRP1 0x0 PRP2 0x0 00:23:23.196 [2024-04-24 21:28:23.637167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:23:23.196 [2024-04-24 21:28:23.637175] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:23.196 [2024-04-24 21:28:23.637182] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.196 [2024-04-24 21:28:23.637188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97232 len:8 PRP1 0x0 PRP2 0x0 00:23:23.196 [2024-04-24 21:28:23.637196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.196 [2024-04-24 21:28:23.637204] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:23.196 [2024-04-24 21:28:23.637210] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.196 [2024-04-24 21:28:23.637217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97240 len:8 PRP1 0x0 PRP2 0x0 00:23:23.196 [2024-04-24 21:28:23.637225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.196 [2024-04-24 21:28:23.637233] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:23.196 [2024-04-24 21:28:23.637239] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.196 [2024-04-24 21:28:23.637246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97248 len:8 PRP1 0x0 PRP2 0x0 00:23:23.196 [2024-04-24 21:28:23.637254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.196 [2024-04-24 21:28:23.637262] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:23.196 [2024-04-24 21:28:23.637276] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.196 [2024-04-24 21:28:23.637284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97256 len:8 PRP1 0x0 PRP2 0x0 00:23:23.196 [2024-04-24 21:28:23.637292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.196 [2024-04-24 21:28:23.637299] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:23.196 [2024-04-24 21:28:23.637307] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.196 [2024-04-24 21:28:23.637314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97264 len:8 PRP1 0x0 PRP2 0x0 00:23:23.196 [2024-04-24 21:28:23.637322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.196 [2024-04-24 21:28:23.637330] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:23.196 [2024-04-24 21:28:23.637335] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.196 [2024-04-24 21:28:23.637342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97272 len:8 PRP1 0x0 PRP2 0x0 00:23:23.196 [2024-04-24 21:28:23.637350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.196 [2024-04-24 21:28:23.637358] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:23.196 [2024-04-24 21:28:23.637365] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.196 [2024-04-24 21:28:23.637372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97280 len:8 PRP1 0x0 PRP2 0x0 00:23:23.196 [2024-04-24 21:28:23.637379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.196 [2024-04-24 21:28:23.637387] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:23.196 [2024-04-24 21:28:23.637393] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.196 [2024-04-24 21:28:23.637400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97288 len:8 PRP1 0x0 PRP2 0x0 00:23:23.196 [2024-04-24 21:28:23.637408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.196 [2024-04-24 21:28:23.637416] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:23.196 [2024-04-24 21:28:23.637422] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.196 [2024-04-24 21:28:23.637429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97296 len:8 PRP1 0x0 PRP2 0x0 00:23:23.196 [2024-04-24 21:28:23.637437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.196 [2024-04-24 21:28:23.637445] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:23.196 [2024-04-24 21:28:23.637451] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.196 [2024-04-24 21:28:23.637458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97304 len:8 PRP1 0x0 PRP2 0x0 00:23:23.196 [2024-04-24 21:28:23.637466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.196 [2024-04-24 21:28:23.637473] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:23.196 [2024-04-24 21:28:23.637479] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.196 [2024-04-24 21:28:23.637486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97312 len:8 PRP1 0x0 PRP2 0x0 00:23:23.196 [2024-04-24 21:28:23.637494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.196 [2024-04-24 21:28:23.637502] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:23.196 [2024-04-24 21:28:23.637508] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.196 [2024-04-24 21:28:23.637515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97320 len:8 PRP1 0x0 PRP2 0x0 00:23:23.196 [2024-04-24 21:28:23.637523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.196 [2024-04-24 21:28:23.637533] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:23:23.196 [2024-04-24 21:28:23.637539] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.196 [2024-04-24 21:28:23.637547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97328 len:8 PRP1 0x0 PRP2 0x0 00:23:23.196 [2024-04-24 21:28:23.637555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.196 [2024-04-24 21:28:23.637563] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:23.196 [2024-04-24 21:28:23.637569] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.196 [2024-04-24 21:28:23.637576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97336 len:8 PRP1 0x0 PRP2 0x0 00:23:23.196 [2024-04-24 21:28:23.637583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.196 [2024-04-24 21:28:23.637591] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:23.196 [2024-04-24 21:28:23.637597] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.196 [2024-04-24 21:28:23.637605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97344 len:8 PRP1 0x0 PRP2 0x0 00:23:23.196 [2024-04-24 21:28:23.637612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.196 [2024-04-24 21:28:23.637620] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:23.196 [2024-04-24 21:28:23.637626] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.196 [2024-04-24 21:28:23.637633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97352 len:8 PRP1 0x0 PRP2 0x0 00:23:23.196 [2024-04-24 21:28:23.637644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.196 [2024-04-24 21:28:23.637652] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:23.196 [2024-04-24 21:28:23.637658] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.196 [2024-04-24 21:28:23.637665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97360 len:8 PRP1 0x0 PRP2 0x0 00:23:23.196 [2024-04-24 21:28:23.637673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.196 [2024-04-24 21:28:23.637681] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:23.196 [2024-04-24 21:28:23.637687] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.196 [2024-04-24 21:28:23.637693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97368 len:8 PRP1 0x0 PRP2 0x0 00:23:23.196 [2024-04-24 21:28:23.637701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.196 [2024-04-24 21:28:23.637709] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:23.196 [2024-04-24 
21:28:23.637715] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.196 [2024-04-24 21:28:23.637722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97376 len:8 PRP1 0x0 PRP2 0x0 00:23:23.196 [2024-04-24 21:28:23.637730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.196 [2024-04-24 21:28:23.637737] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:23.196 [2024-04-24 21:28:23.637743] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.196 [2024-04-24 21:28:23.637750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97384 len:8 PRP1 0x0 PRP2 0x0 00:23:23.196 [2024-04-24 21:28:23.637759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.196 [2024-04-24 21:28:23.637767] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:23.196 [2024-04-24 21:28:23.637773] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.196 [2024-04-24 21:28:23.637780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97392 len:8 PRP1 0x0 PRP2 0x0 00:23:23.196 [2024-04-24 21:28:23.637788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.196 [2024-04-24 21:28:23.637796] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:23.196 [2024-04-24 21:28:23.637801] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.196 [2024-04-24 21:28:23.637807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97400 len:8 PRP1 0x0 PRP2 0x0 00:23:23.196 [2024-04-24 21:28:23.637816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.196 [2024-04-24 21:28:23.637823] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:23.196 [2024-04-24 21:28:23.637829] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.196 [2024-04-24 21:28:23.637836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97408 len:8 PRP1 0x0 PRP2 0x0 00:23:23.196 [2024-04-24 21:28:23.637844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.196 [2024-04-24 21:28:23.637852] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:23.196 [2024-04-24 21:28:23.637858] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.196 [2024-04-24 21:28:23.637865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97416 len:8 PRP1 0x0 PRP2 0x0 00:23:23.196 [2024-04-24 21:28:23.637872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.196 [2024-04-24 21:28:23.637880] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:23.196 [2024-04-24 21:28:23.637886] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.196 [2024-04-24 21:28:23.637893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96776 len:8 PRP1 0x0 PRP2 0x0 00:23:23.196 [2024-04-24 21:28:23.637901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.196 [2024-04-24 21:28:23.637909] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:23.196 [2024-04-24 21:28:23.637915] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.196 [2024-04-24 21:28:23.637922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96784 len:8 PRP1 0x0 PRP2 0x0 00:23:23.196 [2024-04-24 21:28:23.637929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.196 [2024-04-24 21:28:23.637937] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:23.196 [2024-04-24 21:28:23.637943] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.196 [2024-04-24 21:28:23.637951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97424 len:8 PRP1 0x0 PRP2 0x0 00:23:23.196 [2024-04-24 21:28:23.637958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.196 [2024-04-24 21:28:23.638082] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x614000007240 was disconnected and freed. reset controller. 00:23:23.196 [2024-04-24 21:28:23.638100] bdev_nvme.c:1856:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:23:23.196 [2024-04-24 21:28:23.638114] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:23.196 [2024-04-24 21:28:23.640777] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:23.196 [2024-04-24 21:28:23.640806] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004a40 (9): Bad file descriptor 00:23:23.196 [2024-04-24 21:28:23.790249] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
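In API terms, the run above shows two events: every command still queued on the dying qpair is manually completed with NVMe status "ABORTED - SQ DELETION", and bdev_nvme then fails the controller over from 10.0.0.2:4420 to 10.0.0.2:4421 and resets it. A minimal sketch of how a consumer of SPDK's public NVMe driver API would handle those events, assuming the public spdk_nvme_* calls and SPDK_NVME_* constants; my_io_ctx, my_write_done, and failover_and_reset are hypothetical names, and the retry/reset policy is illustrative, not the bdev_nvme module's actual code:

/*
 * Hedged sketch: react to the abort-and-failover sequence logged above.
 * Hypothetical names: my_io_ctx, my_write_done, failover_and_reset.
 */
#include "spdk/nvme.h"

struct my_io_ctx {
	bool retry;	/* resubmit once the controller is back */
};

/* spdk_nvme_cmd_cb-style completion callback for the aborted queued I/O. */
static void
my_write_done(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
	struct my_io_ctx *ctx = cb_arg;

	if (spdk_nvme_cpl_is_success(cpl)) {
		return;
	}

	/*
	 * "ABORTED - SQ DELETION (00/08)" is status code type 0x0 (generic)
	 * with status code 0x08: the command was killed because its
	 * submission queue went away with the disconnected qpair, not
	 * because the I/O itself failed. dnr:0 in every completion above
	 * means the device is not forbidding a retry.
	 */
	if (cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
	    cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION &&
	    !cpl->status.dnr) {
		ctx->retry = true;
	}
}

/*
 * The failover step the log records: re-point the failed controller at the
 * second listener and reset it. spdk_nvme_ctrlr_set_trid() is only valid
 * while the controller is in the failed state, which is exactly where the
 * log leaves it before "resetting controller".
 */
static int
failover_and_reset(struct spdk_nvme_ctrlr *ctrlr)
{
	struct spdk_nvme_transport_id trid = {0};

	if (spdk_nvme_transport_id_parse(&trid,
			"trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4421 "
			"subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		return -1;
	}
	if (spdk_nvme_ctrlr_set_trid(ctrlr, &trid) != 0) {
		return -1;
	}
	return spdk_nvme_ctrlr_reset(ctrlr);
}

In the test itself the bdev_nvme module performs the equivalent steps internally, which is why the run above ends with "Resetting controller successful." and the log continues with the teardown of the next queue pair below.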
00:23:23.196 [2024-04-24 21:28:27.054455] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:23:23.196 [2024-04-24 21:28:27.054505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same ASYNC EVENT REQUEST (0c) / ABORTED - SQ DELETION (00/08) pair repeats for qid:0 cid:2, cid:1, and cid:0 ...]
00:23:23.196 [2024-04-24 21:28:27.054572] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000004a40 is same with the state(5) to be set
[... every outstanding qid:1 command is then printed and completed with ABORTED - SQ DELETION (00/08), dnr:0: READ commands lba:63096 through lba:63280 (SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) and WRITE commands lba:63480 through lba:63896 (SGL DATA BLOCK OFFSET 0x0 len:0x1000), various cids ...]
00:23:23.198 [2024-04-24 21:28:27.056023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:63896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 [2024-04-24
21:28:27.056031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.198 [2024-04-24 21:28:27.056039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:63904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.198 [2024-04-24 21:28:27.056047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.198 [2024-04-24 21:28:27.056056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:63912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.198 [2024-04-24 21:28:27.056064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.198 [2024-04-24 21:28:27.056073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:63920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.198 [2024-04-24 21:28:27.056080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.198 [2024-04-24 21:28:27.056089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:63928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.198 [2024-04-24 21:28:27.056097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.198 [2024-04-24 21:28:27.056106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:63936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.198 [2024-04-24 21:28:27.056114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.198 [2024-04-24 21:28:27.056124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:63944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.198 [2024-04-24 21:28:27.056132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.198 [2024-04-24 21:28:27.056141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:63952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.198 [2024-04-24 21:28:27.056149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.198 [2024-04-24 21:28:27.056158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:63960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.198 [2024-04-24 21:28:27.056166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.198 [2024-04-24 21:28:27.056175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:63968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.198 [2024-04-24 21:28:27.056182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.198 [2024-04-24 21:28:27.056191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:63976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.198 [2024-04-24 21:28:27.056199] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.198 [2024-04-24 21:28:27.056208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:63984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.198 [2024-04-24 21:28:27.056216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.198 [2024-04-24 21:28:27.056225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:63992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.198 [2024-04-24 21:28:27.056233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.198 [2024-04-24 21:28:27.056242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:64000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.198 [2024-04-24 21:28:27.056250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.198 [2024-04-24 21:28:27.056259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:64008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.198 [2024-04-24 21:28:27.056270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.198 [2024-04-24 21:28:27.056279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:64016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.198 [2024-04-24 21:28:27.056287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.198 [2024-04-24 21:28:27.056297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:64024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.198 [2024-04-24 21:28:27.056304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.198 [2024-04-24 21:28:27.056314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:64032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.198 [2024-04-24 21:28:27.056321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.198 [2024-04-24 21:28:27.056330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:64040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.198 [2024-04-24 21:28:27.056339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.198 [2024-04-24 21:28:27.056353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:64048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.198 [2024-04-24 21:28:27.056361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.198 [2024-04-24 21:28:27.056371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:64056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.198 [2024-04-24 21:28:27.056379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.198 [2024-04-24 21:28:27.056388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:64064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.198 [2024-04-24 21:28:27.056395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.198 [2024-04-24 21:28:27.056404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:64072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.198 [2024-04-24 21:28:27.056411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.198 [2024-04-24 21:28:27.056421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:64080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.198 [2024-04-24 21:28:27.056428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.198 [2024-04-24 21:28:27.056437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:64088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.198 [2024-04-24 21:28:27.056445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.198 [2024-04-24 21:28:27.056455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:64096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.198 [2024-04-24 21:28:27.056463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.198 [2024-04-24 21:28:27.056472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:64104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.198 [2024-04-24 21:28:27.056479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.198 [2024-04-24 21:28:27.056488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:64112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:23.198 [2024-04-24 21:28:27.056496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.198 [2024-04-24 21:28:27.056506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:63288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.198 [2024-04-24 21:28:27.056514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.198 [2024-04-24 21:28:27.056523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:63296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.198 [2024-04-24 21:28:27.056531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.198 [2024-04-24 21:28:27.056540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:63304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.198 [2024-04-24 21:28:27.056548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:23.198 [2024-04-24 21:28:27.056558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:63312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.198 [2024-04-24 21:28:27.056566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.198 [2024-04-24 21:28:27.056575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:63320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.198 [2024-04-24 21:28:27.056584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.198 [2024-04-24 21:28:27.056593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:63328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.198 [2024-04-24 21:28:27.056600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.198 [2024-04-24 21:28:27.056610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:63336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.198 [2024-04-24 21:28:27.056617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.198 [2024-04-24 21:28:27.056626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:63344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.198 [2024-04-24 21:28:27.056634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.198 [2024-04-24 21:28:27.056643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:63352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.198 [2024-04-24 21:28:27.056650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.198 [2024-04-24 21:28:27.056659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:63360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.198 [2024-04-24 21:28:27.056666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.198 [2024-04-24 21:28:27.056676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:63368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.198 [2024-04-24 21:28:27.056684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.198 [2024-04-24 21:28:27.056693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:63376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.198 [2024-04-24 21:28:27.056700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.198 [2024-04-24 21:28:27.056710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:63384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.198 [2024-04-24 21:28:27.056718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.198 [2024-04-24 
21:28:27.056727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:63392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.198 [2024-04-24 21:28:27.056735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.198 [2024-04-24 21:28:27.056744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:63400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.198 [2024-04-24 21:28:27.056752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.198 [2024-04-24 21:28:27.056762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:63408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.198 [2024-04-24 21:28:27.056770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.198 [2024-04-24 21:28:27.056780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:63416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.198 [2024-04-24 21:28:27.056788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.198 [2024-04-24 21:28:27.056797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:63424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.198 [2024-04-24 21:28:27.056805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.198 [2024-04-24 21:28:27.056815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:63432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.198 [2024-04-24 21:28:27.056822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.198 [2024-04-24 21:28:27.056832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:63440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.198 [2024-04-24 21:28:27.056839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.198 [2024-04-24 21:28:27.056849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:63448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.198 [2024-04-24 21:28:27.056857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.198 [2024-04-24 21:28:27.056866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:63456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.198 [2024-04-24 21:28:27.056874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.198 [2024-04-24 21:28:27.056883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:63464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.198 [2024-04-24 21:28:27.056891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.198 [2024-04-24 21:28:27.056916] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:23.198 [2024-04-24 21:28:27.056924] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.198 [2024-04-24 21:28:27.056934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63472 len:8 PRP1 0x0 PRP2 0x0 00:23:23.198 [2024-04-24 21:28:27.056943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.198 [2024-04-24 21:28:27.057060] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x614000008240 was disconnected and freed. reset controller. 00:23:23.198 [2024-04-24 21:28:27.057075] bdev_nvme.c:1856:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:23:23.198 [2024-04-24 21:28:27.057088] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:23.198 [2024-04-24 21:28:27.059711] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:23.198 [2024-04-24 21:28:27.059740] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004a40 (9): Bad file descriptor 00:23:23.198 [2024-04-24 21:28:27.134520] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:23:23.198 [2024-04-24 21:28:31.381653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:91752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.198 [2024-04-24 21:28:31.381705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.198 [2024-04-24 21:28:31.381731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:91760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.198 [2024-04-24 21:28:31.381745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.198 [2024-04-24 21:28:31.381756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:91768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.198 [2024-04-24 21:28:31.381764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.198 [2024-04-24 21:28:31.381774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:91776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.198 [2024-04-24 21:28:31.381781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.198 [2024-04-24 21:28:31.381791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:91784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.198 [2024-04-24 21:28:31.381799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.198 [2024-04-24 21:28:31.381809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:91792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.198 [2024-04-24 21:28:31.381817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.198 [2024-04-24 
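(Editor's note: the status SPDK prints in parentheses is the NVMe (SCT/SC) pair from the completion entry, so "(00/08)" is Status Code Type 0h, Generic Command Status, with Status Code 08h, Command Aborted due to SQ Deletion; dnr:0 means the Do Not Retry bit is clear, which is why the bdev layer can retry these I/Os after failover. A minimal decoding sketch follows; it is illustrative and not part of SPDK, and it maps only the generic codes relevant to this log.)

    # Decode the "(SCT/SC)" pair printed by spdk_nvme_print_completion,
    # e.g. "(00/08)". Tables follow the NVMe base spec; only a few
    # generic status codes are filled in for illustration.
    SCT_NAMES = {0x0: "Generic Command Status",
                 0x1: "Command Specific Status",
                 0x2: "Media and Data Integrity Errors",
                 0x7: "Vendor Specific"}
    GENERIC_SC = {0x00: "Successful Completion",
                  0x04: "Data Transfer Error",
                  0x08: "Command Aborted due to SQ Deletion"}

    def decode_status(sct: int, sc: int) -> str:
        """Return a human-readable name for an NVMe completion status."""
        sct_name = SCT_NAMES.get(sct, f"SCT {sct:#x}")
        # Only the generic (SCT 0h) table is populated in this sketch.
        sc_name = GENERIC_SC.get(sc, f"SC {sc:#x}") if sct == 0x0 else f"SC {sc:#x}"
        return f"{sct_name}: {sc_name}"

    # For the completions above:
    print(decode_status(0x00, 0x08))
    # -> Generic Command Status: Command Aborted due to SQ Deletion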
00:23:23.198 [2024-04-24 21:28:31.381653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:91752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:23.198 [2024-04-24 21:28:31.381705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:23.199 [... after the reset, the abort pattern repeats on the next qpair: queued READs lba:91760-92128 (SGL TRANSPORT DATA BLOCK) and WRITEs lba:92136-92592 (SGL DATA BLOCK OFFSET) are each printed and completed as ABORTED - SQ DELETION (00/08) qid:1 ...]
00:23:23.200 [2024-04-24 21:28:31.383588] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:23.200 [2024-04-24 21:28:31.383598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92600 len:8 PRP1 0x0 PRP2 0x0
00:23:23.200 [2024-04-24 21:28:31.383608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:23.200 [... the nvme_qpair_abort_queued_reqs / nvme_qpair_manual_complete_request sequence repeats for WRITEs lba:92608-92696 (cid:0, PRP1 0x0 PRP2 0x0), each completed as ABORTED - SQ DELETION (00/08) ...]
00:23:23.200 [2024-04-24 21:28:31.383971] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting
queued i/o 00:23:23.200 [2024-04-24 21:28:31.383977] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.200 [2024-04-24 21:28:31.383985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92704 len:8 PRP1 0x0 PRP2 0x0 00:23:23.200 [2024-04-24 21:28:31.383993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.200 [2024-04-24 21:28:31.384001] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:23.200 [2024-04-24 21:28:31.384006] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.200 [2024-04-24 21:28:31.384013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92712 len:8 PRP1 0x0 PRP2 0x0 00:23:23.200 [2024-04-24 21:28:31.384021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.200 [2024-04-24 21:28:31.384028] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:23.200 [2024-04-24 21:28:31.384034] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.200 [2024-04-24 21:28:31.384041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92720 len:8 PRP1 0x0 PRP2 0x0 00:23:23.200 [2024-04-24 21:28:31.384048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.200 [2024-04-24 21:28:31.384056] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:23.200 [2024-04-24 21:28:31.384062] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.200 [2024-04-24 21:28:31.384069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92728 len:8 PRP1 0x0 PRP2 0x0 00:23:23.200 [2024-04-24 21:28:31.384076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.200 [2024-04-24 21:28:31.384084] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:23.200 [2024-04-24 21:28:31.384091] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.200 [2024-04-24 21:28:31.384097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92736 len:8 PRP1 0x0 PRP2 0x0 00:23:23.200 [2024-04-24 21:28:31.384105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.200 [2024-04-24 21:28:31.384113] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:23.200 [2024-04-24 21:28:31.384119] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.200 [2024-04-24 21:28:31.384126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92744 len:8 PRP1 0x0 PRP2 0x0 00:23:23.200 [2024-04-24 21:28:31.384134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.200 [2024-04-24 21:28:31.384141] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:23.200 [2024-04-24 21:28:31.384147] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.200 [2024-04-24 21:28:31.384154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92752 len:8 PRP1 0x0 PRP2 0x0 00:23:23.200 [2024-04-24 21:28:31.384161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.200 [2024-04-24 21:28:31.384174] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:23.200 [2024-04-24 21:28:31.384180] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.200 [2024-04-24 21:28:31.384187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92760 len:8 PRP1 0x0 PRP2 0x0 00:23:23.200 [2024-04-24 21:28:31.384195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.200 [2024-04-24 21:28:31.384203] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:23.200 [2024-04-24 21:28:31.384210] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:23.200 [2024-04-24 21:28:31.384216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92768 len:8 PRP1 0x0 PRP2 0x0 00:23:23.200 [2024-04-24 21:28:31.384224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.200 [2024-04-24 21:28:31.384346] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x614000009440 was disconnected and freed. reset controller. 00:23:23.200 [2024-04-24 21:28:31.384360] bdev_nvme.c:1856:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:23:23.200 [2024-04-24 21:28:31.384391] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:23.200 [2024-04-24 21:28:31.384405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.200 [2024-04-24 21:28:31.384415] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:23.200 [2024-04-24 21:28:31.384423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.200 [2024-04-24 21:28:31.384432] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:23.200 [2024-04-24 21:28:31.384440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.200 [2024-04-24 21:28:31.384448] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:23.200 [2024-04-24 21:28:31.384456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.200 [2024-04-24 21:28:31.384463] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
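The burst just above is the signature of a dropped path: every write still queued on the qpair for 10.0.0.2:4422 is completed manually as ABORTED - SQ DELETION, the admin queue's async-event requests are aborted the same way, and bdev_nvme then fails the controller over to the next registered trid (10.0.0.2:4420). To confirm by hand which portal a controller has landed on after such an event, the trsvcid can be read back over RPC; a minimal sketch, not part of failover.sh itself (the jq path mirrors the one used later in this same log):

    # Ask bdev_nvme which transport ID the named controller is currently using;
    # -s selects the bdevperf app's RPC socket instead of the default spdk.sock.
    /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers -n NVMe0 \
        | jq -r '.[].ctrlrs[].trid.trsvcid'   # expect 4420 after the failover above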
00:23:23.200 [2024-04-24 21:28:31.384515] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004a40 (9): Bad file descriptor
00:23:23.200 [2024-04-24 21:28:31.387037] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:23.200 [2024-04-24 21:28:31.548995] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:23:23.200
00:23:23.200 Latency(us)
00:23:23.200 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:23.200 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:23:23.200 Verification LBA range: start 0x0 length 0x4000
00:23:23.200 NVMe0n1 : 15.00 11183.91 43.69 1339.23 0.00 10200.77 793.33 18212.11
00:23:23.200 ===================================================================================================================
00:23:23.200 Total : 11183.91 43.69 1339.23 0.00 10200.77 793.33 18212.11
00:23:23.200 Received shutdown signal, test time was about 15.000000 seconds
00:23:23.200
00:23:23.200 Latency(us)
00:23:23.200 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:23.200 ===================================================================================================================
00:23:23.200 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:23:23.200 21:28:38 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:23:23.200 21:28:38 -- host/failover.sh@65 -- # count=3
00:23:23.200 21:28:38 -- host/failover.sh@67 -- # (( count != 3 ))
00:23:23.200 21:28:38 -- host/failover.sh@73 -- # bdevperf_pid=1310007
00:23:23.200 21:28:38 -- host/failover.sh@75 -- # waitforlisten 1310007 /var/tmp/bdevperf.sock
00:23:23.200 21:28:38 -- common/autotest_common.sh@817 -- # '[' -z 1310007 ']'
00:23:23.200 21:28:38 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:23:23.200 21:28:38 -- common/autotest_common.sh@822 -- # local max_retries=100
00:23:23.200 21:28:38 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:23:23.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
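host/failover.sh's pass/fail gate is visible in the trace above: it counts reset events in the captured bdevperf output and requires one successful reset per forced path drop. Roughly, the idiom the script is exercising (try.txt being the file the run log was captured to):

    # Three path drops should yield exactly three successful controller resets.
    count=$(grep -c 'Resetting controller successful' try.txt)
    if (( count != 3 )); then
        echo "expected 3 successful resets, got $count" >&2
        exit 1
    fi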
00:23:23.200 21:28:38 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:23.200 21:28:38 -- common/autotest_common.sh@10 -- # set +x 00:23:23.200 21:28:38 -- host/failover.sh@72 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:23:24.136 21:28:38 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:24.136 21:28:38 -- common/autotest_common.sh@850 -- # return 0 00:23:24.136 21:28:38 -- host/failover.sh@76 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:24.136 [2024-04-24 21:28:38.931296] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:24.136 21:28:38 -- host/failover.sh@77 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:24.136 [2024-04-24 21:28:39.071383] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:23:24.136 21:28:39 -- host/failover.sh@78 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:24.708 NVMe0n1 00:23:24.708 21:28:39 -- host/failover.sh@79 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:24.968 00:23:24.968 21:28:39 -- host/failover.sh@80 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:24.968 00:23:24.968 21:28:39 -- host/failover.sh@82 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:25.226 21:28:39 -- host/failover.sh@82 -- # grep -q NVMe0 00:23:25.226 21:28:40 -- host/failover.sh@84 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:25.484 21:28:40 -- host/failover.sh@87 -- # sleep 3 00:23:28.774 21:28:43 -- host/failover.sh@88 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:28.774 21:28:43 -- host/failover.sh@88 -- # grep -q NVMe0 00:23:28.774 21:28:43 -- host/failover.sh@90 -- # run_test_pid=1310922 00:23:28.774 21:28:43 -- host/failover.sh@92 -- # wait 1310922 00:23:28.774 21:28:43 -- host/failover.sh@89 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:29.709 0 00:23:29.709 21:28:44 -- host/failover.sh@94 -- # cat /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:29.709 [2024-04-24 21:28:38.105018] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization... 
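Condensed, the setup the xtrace above is driving looks like the following sketch (rpc.py stands for the full /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py path; the target answers on the default /var/tmp/spdk.sock, bdevperf on its own -r socket):

    # Expose two extra portals on the subsystem, so there is somewhere to fail over to.
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
    # Attach bdevperf's controller to the primary portal, then register the same
    # bdev name against the alternates; repeated attaches add failover trids.
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    # Drop the active path; the verify workload should ride through a failover to 4421.
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    sleep 3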
00:23:29.709 [2024-04-24 21:28:38.105183] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1310007 ] 00:23:29.709 EAL: No free 2048 kB hugepages reported on node 1 00:23:29.709 [2024-04-24 21:28:38.235133] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:29.709 [2024-04-24 21:28:38.330169] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:29.709 [2024-04-24 21:28:40.187877] bdev_nvme.c:1856:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:23:29.709 [2024-04-24 21:28:40.187968] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:29.709 [2024-04-24 21:28:40.187984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.709 [2024-04-24 21:28:40.188000] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:29.709 [2024-04-24 21:28:40.188008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.710 [2024-04-24 21:28:40.188017] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:29.710 [2024-04-24 21:28:40.188025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.710 [2024-04-24 21:28:40.188033] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:29.710 [2024-04-24 21:28:40.188041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.710 [2024-04-24 21:28:40.188050] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:29.710 [2024-04-24 21:28:40.188109] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:29.710 [2024-04-24 21:28:40.188135] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004a40 (9): Bad file descriptor 00:23:29.710 [2024-04-24 21:28:40.201271] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:23:29.710 Running I/O for 1 seconds... 
00:23:29.710
00:23:29.710 Latency(us)
00:23:29.710 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:29.710 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:23:29.710 Verification LBA range: start 0x0 length 0x4000
00:23:29.710 NVMe0n1 : 1.05 11130.76 43.48 0.00 0.00 11091.91 2224.77 46358.10
00:23:29.710 ===================================================================================================================
00:23:29.710 Total : 11130.76 43.48 0.00 0.00 11091.91 2224.77 46358.10
00:23:29.710 21:28:44 -- host/failover.sh@95 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:23:29.710 21:28:44 -- host/failover.sh@95 -- # grep -q NVMe0
00:23:29.970 21:28:44 -- host/failover.sh@98 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:23:29.970 21:28:44 -- host/failover.sh@99 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:23:29.970 21:28:44 -- host/failover.sh@99 -- # grep -q NVMe0
00:23:29.970 21:28:44 -- host/failover.sh@100 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:23:30.231 21:28:45 -- host/failover.sh@101 -- # sleep 3
00:23:33.527 21:28:48 -- host/failover.sh@103 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:23:33.527 21:28:48 -- host/failover.sh@103 -- # grep -q NVMe0
00:23:33.527 21:28:48 -- host/failover.sh@108 -- # killprocess 1310007
00:23:33.527 21:28:48 -- common/autotest_common.sh@936 -- # '[' -z 1310007 ']'
00:23:33.527 21:28:48 -- common/autotest_common.sh@940 -- # kill -0 1310007
00:23:33.527 21:28:48 -- common/autotest_common.sh@941 -- # uname
00:23:33.527 21:28:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:23:33.527 21:28:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1310007
00:23:33.527 21:28:48 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:23:33.527 21:28:48 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:23:33.527 21:28:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1310007'
00:23:33.527 killing process with pid 1310007
00:23:33.527 21:28:48 -- common/autotest_common.sh@955 -- # kill 1310007
00:23:33.785 21:28:48 -- common/autotest_common.sh@960 -- # wait 1310007
00:23:33.785 21:28:48 -- host/failover.sh@110 -- # sync
00:23:34.044 21:28:48 -- host/failover.sh@111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:23:34.044 21:28:48 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT
00:23:34.044 21:28:48 -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/try.txt
00:23:34.044 21:28:48 -- host/failover.sh@116 -- # nvmftestfini
00:23:34.044 21:28:48 -- nvmf/common.sh@477 -- # nvmfcleanup
00:23:34.044 21:28:48 -- nvmf/common.sh@117 -- # sync
00:23:34.044 21:28:48 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:23:34.044 21:28:48 -- nvmf/common.sh@120 -- # set +e
00:23:34.044 21:28:48 -- nvmf/common.sh@121 -- # for i in {1..20}
00:23:34.044 21:28:48 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:23:34.044 rmmod nvme_tcp 00:23:34.044 rmmod nvme_fabrics 00:23:34.044 rmmod nvme_keyring 00:23:34.044 21:28:48 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:34.044 21:28:48 -- nvmf/common.sh@124 -- # set -e 00:23:34.044 21:28:48 -- nvmf/common.sh@125 -- # return 0 00:23:34.044 21:28:48 -- nvmf/common.sh@478 -- # '[' -n 1306367 ']' 00:23:34.044 21:28:48 -- nvmf/common.sh@479 -- # killprocess 1306367 00:23:34.044 21:28:48 -- common/autotest_common.sh@936 -- # '[' -z 1306367 ']' 00:23:34.044 21:28:48 -- common/autotest_common.sh@940 -- # kill -0 1306367 00:23:34.044 21:28:48 -- common/autotest_common.sh@941 -- # uname 00:23:34.044 21:28:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:34.044 21:28:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1306367 00:23:34.044 21:28:48 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:34.044 21:28:48 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:34.044 21:28:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1306367' 00:23:34.044 killing process with pid 1306367 00:23:34.044 21:28:48 -- common/autotest_common.sh@955 -- # kill 1306367 00:23:34.044 21:28:48 -- common/autotest_common.sh@960 -- # wait 1306367 00:23:34.614 21:28:49 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:23:34.614 21:28:49 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:23:34.614 21:28:49 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:23:34.614 21:28:49 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:34.614 21:28:49 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:34.614 21:28:49 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:34.614 21:28:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:34.614 21:28:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:36.524 21:28:51 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:36.524 00:23:36.524 real 0m37.614s 00:23:36.524 user 2m0.852s 00:23:36.524 sys 0m6.684s 00:23:36.524 21:28:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:36.524 21:28:51 -- common/autotest_common.sh@10 -- # set +x 00:23:36.524 ************************************ 00:23:36.524 END TEST nvmf_failover 00:23:36.524 ************************************ 00:23:36.785 21:28:51 -- nvmf/nvmf.sh@99 -- # run_test nvmf_discovery /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:23:36.785 21:28:51 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:36.785 21:28:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:36.785 21:28:51 -- common/autotest_common.sh@10 -- # set +x 00:23:36.785 ************************************ 00:23:36.785 START TEST nvmf_discovery 00:23:36.785 ************************************ 00:23:36.785 21:28:51 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:23:36.785 * Looking for test storage... 
* Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host
00:23:36.785 21:28:51 -- host/discovery.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh
00:23:36.785 21:28:51 -- nvmf/common.sh@7 -- # uname -s
00:23:36.785 21:28:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:23:36.785 21:28:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:23:36.785 21:28:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:23:36.785 21:28:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:23:36.785 21:28:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:23:36.785 21:28:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:23:36.785 21:28:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:23:36.785 21:28:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:23:36.785 21:28:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:23:36.785 21:28:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:23:36.785 21:28:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b7babf-2e5c-ee11-906e-a4bf01970bf2
00:23:36.785 21:28:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=80b7babf-2e5c-ee11-906e-a4bf01970bf2
00:23:36.785 21:28:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:23:36.785 21:28:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:23:36.785 21:28:51 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:23:36.785 21:28:51 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:23:36.785 21:28:51 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh
00:23:36.785 21:28:51 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]]
00:23:36.785 21:28:51 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:23:36.785 21:28:51 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:23:36.785 21:28:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... the same golangci/protoc/go triple repeated ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:23:36.785 21:28:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[... repeated toolchain prepends elided; same tail as above ...]
00:23:36.785 21:28:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[... repeated toolchain prepends elided; same tail as above ...]
00:23:36.785 21:28:51 -- paths/export.sh@5 -- # export PATH
00:23:36.785 21:28:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[... repeated toolchain prepends elided; same tail as above ...]
00:23:36.785 21:28:51 -- nvmf/common.sh@47 -- # : 0
00:23:36.785 21:28:51 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:23:36.785 21:28:51 -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:23:36.785 21:28:51 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:23:36.785 21:28:51 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:23:36.785 21:28:51 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:23:36.785 21:28:51 -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:23:36.786 21:28:51 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:23:36.786 21:28:51 -- nvmf/common.sh@51 -- # have_pci_nics=0
00:23:36.786 21:28:51 -- host/discovery.sh@11 -- # '[' tcp == rdma ']'
00:23:36.786 21:28:51 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009
00:23:36.786 21:28:51 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery
00:23:36.786 21:28:51 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode
00:23:36.786 21:28:51 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test
00:23:36.786 21:28:51 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock
00:23:36.786 21:28:51 -- host/discovery.sh@25 -- # nvmftestinit
00:23:36.786 21:28:51 -- nvmf/common.sh@430 -- # '[' -z tcp ']'
00:23:36.786 21:28:51 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:23:36.786 21:28:51 -- nvmf/common.sh@437 -- # prepare_net_devs
00:23:36.786 21:28:51 -- nvmf/common.sh@399 -- # local -g is_hw=no
00:23:36.786 21:28:51 -- nvmf/common.sh@401 -- # remove_spdk_ns
00:23:36.786 21:28:51 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:23:36.786 21:28:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:23:36.786 21:28:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:23:36.786 21:28:51 -- nvmf/common.sh@403 -- # [[ phy-fallback != virt ]]
00:23:37.047 21:28:51 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs
00:23:37.047 21:28:51 -- nvmf/common.sh@285 -- # xtrace_disable
00:23:37.047 21:28:51 -- common/autotest_common.sh@10 -- # set +x
00:23:43.624 21:28:57 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci
00:23:43.624 21:28:57 -- nvmf/common.sh@291 -- # pci_devs=()
00:23:43.624 21:28:57 -- nvmf/common.sh@291 -- # local -a pci_devs
00:23:43.624 21:28:57 -- nvmf/common.sh@292 -- # pci_net_devs=()
00:23:43.624 21:28:57 --
nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:43.624 21:28:57 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:43.624 21:28:57 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:43.624 21:28:57 -- nvmf/common.sh@295 -- # net_devs=() 00:23:43.624 21:28:57 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:43.624 21:28:57 -- nvmf/common.sh@296 -- # e810=() 00:23:43.624 21:28:57 -- nvmf/common.sh@296 -- # local -ga e810 00:23:43.624 21:28:57 -- nvmf/common.sh@297 -- # x722=() 00:23:43.624 21:28:57 -- nvmf/common.sh@297 -- # local -ga x722 00:23:43.624 21:28:57 -- nvmf/common.sh@298 -- # mlx=() 00:23:43.624 21:28:57 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:43.624 21:28:57 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:43.624 21:28:57 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:43.624 21:28:57 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:43.624 21:28:57 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:43.624 21:28:57 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:43.624 21:28:57 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:43.624 21:28:57 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:43.624 21:28:57 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:43.624 21:28:57 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:43.624 21:28:57 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:43.624 21:28:57 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:43.624 21:28:57 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:43.624 21:28:57 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:43.624 21:28:57 -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:23:43.624 21:28:57 -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:23:43.624 21:28:57 -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:23:43.624 21:28:57 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:43.624 21:28:57 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:43.624 21:28:57 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:23:43.624 Found 0000:27:00.0 (0x8086 - 0x159b) 00:23:43.625 21:28:57 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:43.625 21:28:57 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:43.625 21:28:57 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:43.625 21:28:57 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:43.625 21:28:57 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:43.625 21:28:57 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:43.625 21:28:57 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:23:43.625 Found 0000:27:00.1 (0x8086 - 0x159b) 00:23:43.625 21:28:57 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:43.625 21:28:57 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:43.625 21:28:57 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:43.625 21:28:57 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:43.625 21:28:57 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:43.625 21:28:57 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:43.625 21:28:57 -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:23:43.625 21:28:57 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:43.625 21:28:57 -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:43.625 21:28:57 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:23:43.625 21:28:57 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:43.625 21:28:57 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:23:43.625 Found net devices under 0000:27:00.0: cvl_0_0 00:23:43.625 21:28:57 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:23:43.625 21:28:57 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:43.625 21:28:57 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:43.625 21:28:57 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:23:43.625 21:28:57 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:43.625 21:28:57 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:23:43.625 Found net devices under 0000:27:00.1: cvl_0_1 00:23:43.625 21:28:57 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:23:43.625 21:28:57 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:23:43.625 21:28:57 -- nvmf/common.sh@403 -- # is_hw=yes 00:23:43.625 21:28:57 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:23:43.625 21:28:57 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:23:43.625 21:28:57 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:23:43.625 21:28:57 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:43.625 21:28:57 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:43.625 21:28:57 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:43.625 21:28:57 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:43.625 21:28:57 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:43.625 21:28:57 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:43.625 21:28:57 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:43.625 21:28:57 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:43.625 21:28:57 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:43.625 21:28:57 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:43.625 21:28:57 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:43.625 21:28:57 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:43.625 21:28:57 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:43.625 21:28:58 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:43.625 21:28:58 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:43.625 21:28:58 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:43.625 21:28:58 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:43.625 21:28:58 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:43.625 21:28:58 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:43.625 21:28:58 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:43.625 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:43.625 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.179 ms 00:23:43.625 00:23:43.625 --- 10.0.0.2 ping statistics --- 00:23:43.625 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:43.625 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:23:43.625 21:28:58 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:43.625 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:43.625 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.092 ms 00:23:43.625 00:23:43.625 --- 10.0.0.1 ping statistics --- 00:23:43.625 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:43.625 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:23:43.625 21:28:58 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:43.625 21:28:58 -- nvmf/common.sh@411 -- # return 0 00:23:43.625 21:28:58 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:23:43.625 21:28:58 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:43.625 21:28:58 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:23:43.625 21:28:58 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:23:43.625 21:28:58 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:43.625 21:28:58 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:23:43.625 21:28:58 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:23:43.625 21:28:58 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:23:43.625 21:28:58 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:23:43.625 21:28:58 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:43.625 21:28:58 -- common/autotest_common.sh@10 -- # set +x 00:23:43.625 21:28:58 -- nvmf/common.sh@470 -- # nvmfpid=1316151 00:23:43.625 21:28:58 -- nvmf/common.sh@471 -- # waitforlisten 1316151 00:23:43.625 21:28:58 -- common/autotest_common.sh@817 -- # '[' -z 1316151 ']' 00:23:43.625 21:28:58 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:43.625 21:28:58 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:43.625 21:28:58 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:43.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:43.625 21:28:58 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:43.625 21:28:58 -- common/autotest_common.sh@10 -- # set +x 00:23:43.625 21:28:58 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:43.625 [2024-04-24 21:28:58.240555] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization... 00:23:43.625 [2024-04-24 21:28:58.240660] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:43.625 EAL: No free 2048 kB hugepages reported on node 1 00:23:43.625 [2024-04-24 21:28:58.371425] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:43.625 [2024-04-24 21:28:58.463629] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:43.625 [2024-04-24 21:28:58.463663] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:43.625 [2024-04-24 21:28:58.463673] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:43.625 [2024-04-24 21:28:58.463683] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:43.625 [2024-04-24 21:28:58.463690] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
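The two clean pings just above are the point of nvmf_tcp_init: one ice port (cvl_0_0) is moved into a private namespace to act as the target at 10.0.0.2, while its sibling (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, so NVMe/TCP traffic crosses a real interface pair on a single host. The skeleton of what nvmf/common.sh ran, pulled out of the trace:

    # Target side goes into its own namespace; initiator side stays put.
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # Prove both directions work before any NVMe/TCP traffic is attempted.
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1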
00:23:43.625 [2024-04-24 21:28:58.463721] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:44.198 21:28:58 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:44.198 21:28:58 -- common/autotest_common.sh@850 -- # return 0 00:23:44.198 21:28:58 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:23:44.198 21:28:58 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:44.198 21:28:58 -- common/autotest_common.sh@10 -- # set +x 00:23:44.198 21:28:58 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:44.198 21:28:58 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:44.198 21:28:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:44.198 21:28:58 -- common/autotest_common.sh@10 -- # set +x 00:23:44.198 [2024-04-24 21:28:58.983136] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:44.198 21:28:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:44.198 21:28:58 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:23:44.198 21:28:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:44.198 21:28:58 -- common/autotest_common.sh@10 -- # set +x 00:23:44.198 [2024-04-24 21:28:58.991300] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:23:44.198 21:28:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:44.198 21:28:58 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:23:44.198 21:28:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:44.198 21:28:58 -- common/autotest_common.sh@10 -- # set +x 00:23:44.198 null0 00:23:44.198 21:28:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:44.198 21:28:59 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:23:44.198 21:28:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:44.198 21:28:59 -- common/autotest_common.sh@10 -- # set +x 00:23:44.198 null1 00:23:44.198 21:28:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:44.198 21:28:59 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:23:44.198 21:28:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:44.198 21:28:59 -- common/autotest_common.sh@10 -- # set +x 00:23:44.198 21:28:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:44.198 21:28:59 -- host/discovery.sh@45 -- # hostpid=1316338 00:23:44.198 21:28:59 -- host/discovery.sh@46 -- # waitforlisten 1316338 /tmp/host.sock 00:23:44.198 21:28:59 -- common/autotest_common.sh@817 -- # '[' -z 1316338 ']' 00:23:44.198 21:28:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/tmp/host.sock 00:23:44.198 21:28:59 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:44.198 21:28:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:23:44.198 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:23:44.198 21:28:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:44.198 21:28:59 -- host/discovery.sh@44 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:23:44.198 21:28:59 -- common/autotest_common.sh@10 -- # set +x 00:23:44.198 [2024-04-24 21:28:59.097037] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization... 
00:23:44.198 [2024-04-24 21:28:59.097143] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1316338 ] 00:23:44.460 EAL: No free 2048 kB hugepages reported on node 1 00:23:44.460 [2024-04-24 21:28:59.213214] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:44.460 [2024-04-24 21:28:59.307671] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:45.031 21:28:59 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:45.031 21:28:59 -- common/autotest_common.sh@850 -- # return 0 00:23:45.031 21:28:59 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:45.031 21:28:59 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:23:45.031 21:28:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:45.031 21:28:59 -- common/autotest_common.sh@10 -- # set +x 00:23:45.031 21:28:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:45.031 21:28:59 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:23:45.031 21:28:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:45.031 21:28:59 -- common/autotest_common.sh@10 -- # set +x 00:23:45.031 21:28:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:45.031 21:28:59 -- host/discovery.sh@72 -- # notify_id=0 00:23:45.031 21:28:59 -- host/discovery.sh@83 -- # get_subsystem_names 00:23:45.031 21:28:59 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:45.031 21:28:59 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:45.031 21:28:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:45.031 21:28:59 -- common/autotest_common.sh@10 -- # set +x 00:23:45.031 21:28:59 -- host/discovery.sh@59 -- # xargs 00:23:45.031 21:28:59 -- host/discovery.sh@59 -- # sort 00:23:45.031 21:28:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:45.031 21:28:59 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:23:45.031 21:28:59 -- host/discovery.sh@84 -- # get_bdev_list 00:23:45.031 21:28:59 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:45.031 21:28:59 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:45.031 21:28:59 -- host/discovery.sh@55 -- # xargs 00:23:45.031 21:28:59 -- host/discovery.sh@55 -- # sort 00:23:45.031 21:28:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:45.031 21:28:59 -- common/autotest_common.sh@10 -- # set +x 00:23:45.031 21:28:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:45.031 21:28:59 -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:23:45.031 21:28:59 -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:23:45.031 21:28:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:45.031 21:28:59 -- common/autotest_common.sh@10 -- # set +x 00:23:45.031 21:28:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:45.031 21:28:59 -- host/discovery.sh@87 -- # get_subsystem_names 00:23:45.031 21:28:59 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:45.031 21:28:59 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:45.031 21:28:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:45.031 21:28:59 -- host/discovery.sh@59 -- # sort 
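Stripped of the xtrace noise, the host side of the discovery test reduces to: start a long-running discovery connection from the second SPDK app, then poll two views of the result (rpc_cmd in the trace is the suite's wrapper around scripts/rpc.py). A condensed sketch using the sockets from this run:

    # Attach to the target's discovery service; -b nvme is the name prefix for
    # controllers discovery auto-attaches, -q the host NQN to present.
    rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test
    # get_subsystem_names: sorted one-line list of attached controllers.
    rpc.py -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
    # get_bdev_list: sorted one-line list of namespaces surfaced as bdevs.
    rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs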
00:23:45.031 21:28:59 -- common/autotest_common.sh@10 -- # set +x 00:23:45.031 21:28:59 -- host/discovery.sh@59 -- # xargs 00:23:45.031 21:28:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:45.031 21:28:59 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:23:45.031 21:28:59 -- host/discovery.sh@88 -- # get_bdev_list 00:23:45.031 21:28:59 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:45.031 21:28:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:45.031 21:28:59 -- common/autotest_common.sh@10 -- # set +x 00:23:45.031 21:28:59 -- host/discovery.sh@55 -- # xargs 00:23:45.031 21:28:59 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:45.031 21:28:59 -- host/discovery.sh@55 -- # sort 00:23:45.031 21:28:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:45.031 21:28:59 -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:23:45.031 21:28:59 -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:23:45.031 21:28:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:45.031 21:28:59 -- common/autotest_common.sh@10 -- # set +x 00:23:45.031 21:28:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:45.031 21:28:59 -- host/discovery.sh@91 -- # get_subsystem_names 00:23:45.031 21:28:59 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:45.031 21:28:59 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:45.031 21:28:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:45.031 21:28:59 -- host/discovery.sh@59 -- # sort 00:23:45.031 21:28:59 -- common/autotest_common.sh@10 -- # set +x 00:23:45.031 21:28:59 -- host/discovery.sh@59 -- # xargs 00:23:45.293 21:29:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:45.293 21:29:00 -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:23:45.293 21:29:00 -- host/discovery.sh@92 -- # get_bdev_list 00:23:45.293 21:29:00 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:45.293 21:29:00 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:45.293 21:29:00 -- host/discovery.sh@55 -- # xargs 00:23:45.293 21:29:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:45.293 21:29:00 -- common/autotest_common.sh@10 -- # set +x 00:23:45.293 21:29:00 -- host/discovery.sh@55 -- # sort 00:23:45.293 21:29:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:45.293 21:29:00 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:23:45.293 21:29:00 -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:45.293 21:29:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:45.293 21:29:00 -- common/autotest_common.sh@10 -- # set +x 00:23:45.293 [2024-04-24 21:29:00.079542] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:45.293 21:29:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:45.293 21:29:00 -- host/discovery.sh@97 -- # get_subsystem_names 00:23:45.293 21:29:00 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:45.293 21:29:00 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:45.293 21:29:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:45.293 21:29:00 -- host/discovery.sh@59 -- # sort 00:23:45.293 21:29:00 -- host/discovery.sh@59 -- # xargs 00:23:45.293 21:29:00 -- common/autotest_common.sh@10 -- # set +x 00:23:45.293 21:29:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:45.293 21:29:00 -- 
host/discovery.sh@97 -- # [[ '' == '' ]] 00:23:45.293 21:29:00 -- host/discovery.sh@98 -- # get_bdev_list 00:23:45.293 21:29:00 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:45.293 21:29:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:45.293 21:29:00 -- common/autotest_common.sh@10 -- # set +x 00:23:45.293 21:29:00 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:45.293 21:29:00 -- host/discovery.sh@55 -- # sort 00:23:45.293 21:29:00 -- host/discovery.sh@55 -- # xargs 00:23:45.293 21:29:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:45.293 21:29:00 -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:23:45.293 21:29:00 -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:23:45.293 21:29:00 -- host/discovery.sh@79 -- # expected_count=0 00:23:45.293 21:29:00 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:45.293 21:29:00 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:45.293 21:29:00 -- common/autotest_common.sh@901 -- # local max=10 00:23:45.293 21:29:00 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:45.293 21:29:00 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:45.293 21:29:00 -- common/autotest_common.sh@903 -- # get_notification_count 00:23:45.293 21:29:00 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:23:45.293 21:29:00 -- host/discovery.sh@74 -- # jq '. | length' 00:23:45.293 21:29:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:45.293 21:29:00 -- common/autotest_common.sh@10 -- # set +x 00:23:45.293 21:29:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:45.293 21:29:00 -- host/discovery.sh@74 -- # notification_count=0 00:23:45.293 21:29:00 -- host/discovery.sh@75 -- # notify_id=0 00:23:45.293 21:29:00 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:23:45.293 21:29:00 -- common/autotest_common.sh@904 -- # return 0 00:23:45.293 21:29:00 -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:23:45.293 21:29:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:45.293 21:29:00 -- common/autotest_common.sh@10 -- # set +x 00:23:45.293 21:29:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:45.293 21:29:00 -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:45.294 21:29:00 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:45.294 21:29:00 -- common/autotest_common.sh@901 -- # local max=10 00:23:45.294 21:29:00 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:45.294 21:29:00 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:45.294 21:29:00 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:23:45.294 21:29:00 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:45.294 21:29:00 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:45.294 21:29:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:45.294 21:29:00 -- host/discovery.sh@59 -- # sort 00:23:45.294 21:29:00 -- common/autotest_common.sh@10 -- # set +x 00:23:45.294 21:29:00 -- host/discovery.sh@59 -- # xargs 00:23:45.294 21:29:00 -- common/autotest_common.sh@577 -- # [[ 0 == 
0 ]] 00:23:45.294 21:29:00 -- common/autotest_common.sh@903 -- # [[ '' == \n\v\m\e\0 ]] 00:23:45.294 21:29:00 -- common/autotest_common.sh@906 -- # sleep 1 00:23:45.947 [2024-04-24 21:29:00.858678] bdev_nvme.c:6906:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:45.947 [2024-04-24 21:29:00.858713] bdev_nvme.c:6986:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:45.947 [2024-04-24 21:29:00.858740] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:46.209 [2024-04-24 21:29:00.946787] bdev_nvme.c:6835:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:23:46.209 [2024-04-24 21:29:01.171160] bdev_nvme.c:6725:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:46.209 [2024-04-24 21:29:01.171192] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:46.471 21:29:01 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:46.471 21:29:01 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:46.471 21:29:01 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:23:46.471 21:29:01 -- host/discovery.sh@59 -- # sort 00:23:46.471 21:29:01 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:46.471 21:29:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:46.471 21:29:01 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:46.471 21:29:01 -- common/autotest_common.sh@10 -- # set +x 00:23:46.471 21:29:01 -- host/discovery.sh@59 -- # xargs 00:23:46.471 21:29:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:46.471 21:29:01 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:46.471 21:29:01 -- common/autotest_common.sh@904 -- # return 0 00:23:46.471 21:29:01 -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:23:46.471 21:29:01 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:23:46.471 21:29:01 -- common/autotest_common.sh@901 -- # local max=10 00:23:46.471 21:29:01 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:46.471 21:29:01 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:23:46.471 21:29:01 -- common/autotest_common.sh@903 -- # get_bdev_list 00:23:46.471 21:29:01 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:46.471 21:29:01 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:46.471 21:29:01 -- host/discovery.sh@55 -- # sort 00:23:46.471 21:29:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:46.471 21:29:01 -- common/autotest_common.sh@10 -- # set +x 00:23:46.471 21:29:01 -- host/discovery.sh@55 -- # xargs 00:23:46.471 21:29:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:46.471 21:29:01 -- common/autotest_common.sh@903 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:23:46.471 21:29:01 -- common/autotest_common.sh@904 -- # return 0 00:23:46.471 21:29:01 -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:23:46.471 21:29:01 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:23:46.471 21:29:01 -- common/autotest_common.sh@901 -- # local max=10 00:23:46.471 21:29:01 -- 
common/autotest_common.sh@902 -- # (( max-- )) 00:23:46.471 21:29:01 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:23:46.471 21:29:01 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:23:46.471 21:29:01 -- host/discovery.sh@63 -- # sort -n 00:23:46.471 21:29:01 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:46.471 21:29:01 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:46.471 21:29:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:46.471 21:29:01 -- host/discovery.sh@63 -- # xargs 00:23:46.471 21:29:01 -- common/autotest_common.sh@10 -- # set +x 00:23:46.471 21:29:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:46.471 21:29:01 -- common/autotest_common.sh@903 -- # [[ 4420 == \4\4\2\0 ]] 00:23:46.471 21:29:01 -- common/autotest_common.sh@904 -- # return 0 00:23:46.471 21:29:01 -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:23:46.471 21:29:01 -- host/discovery.sh@79 -- # expected_count=1 00:23:46.471 21:29:01 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:46.471 21:29:01 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:46.471 21:29:01 -- common/autotest_common.sh@901 -- # local max=10 00:23:46.471 21:29:01 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:46.471 21:29:01 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:46.471 21:29:01 -- common/autotest_common.sh@903 -- # get_notification_count 00:23:46.471 21:29:01 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:23:46.471 21:29:01 -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:46.471 21:29:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:46.471 21:29:01 -- common/autotest_common.sh@10 -- # set +x 00:23:46.471 21:29:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:46.471 21:29:01 -- host/discovery.sh@74 -- # notification_count=1 00:23:46.471 21:29:01 -- host/discovery.sh@75 -- # notify_id=1 00:23:46.471 21:29:01 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:23:46.471 21:29:01 -- common/autotest_common.sh@904 -- # return 0 00:23:46.471 21:29:01 -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:23:46.471 21:29:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:46.471 21:29:01 -- common/autotest_common.sh@10 -- # set +x 00:23:46.471 21:29:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:46.471 21:29:01 -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:46.471 21:29:01 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:46.471 21:29:01 -- common/autotest_common.sh@901 -- # local max=10 00:23:46.471 21:29:01 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:46.471 21:29:01 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:46.471 21:29:01 -- common/autotest_common.sh@903 -- # get_bdev_list 00:23:46.733 21:29:01 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:46.734 21:29:01 -- host/discovery.sh@55 -- # sort 00:23:46.734 21:29:01 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:46.734 21:29:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:46.734 21:29:01 -- common/autotest_common.sh@10 -- # set +x 00:23:46.734 21:29:01 -- host/discovery.sh@55 -- # xargs 00:23:46.734 21:29:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:46.734 21:29:01 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:46.734 21:29:01 -- common/autotest_common.sh@904 -- # return 0 00:23:46.734 21:29:01 -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:23:46.734 21:29:01 -- host/discovery.sh@79 -- # expected_count=1 00:23:46.734 21:29:01 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:46.734 21:29:01 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:46.734 21:29:01 -- common/autotest_common.sh@901 -- # local max=10 00:23:46.734 21:29:01 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:46.734 21:29:01 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:46.734 21:29:01 -- common/autotest_common.sh@903 -- # get_notification_count 00:23:46.734 21:29:01 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:23:46.734 21:29:01 -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:46.734 21:29:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:46.734 21:29:01 -- common/autotest_common.sh@10 -- # set +x 00:23:46.734 21:29:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:46.734 21:29:01 -- host/discovery.sh@74 -- # notification_count=1 00:23:46.734 21:29:01 -- host/discovery.sh@75 -- # notify_id=2 00:23:46.734 21:29:01 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:23:46.734 21:29:01 -- common/autotest_common.sh@904 -- # return 0 00:23:46.734 21:29:01 -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:23:46.734 21:29:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:46.734 21:29:01 -- common/autotest_common.sh@10 -- # set +x 00:23:46.734 [2024-04-24 21:29:01.500224] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:46.734 [2024-04-24 21:29:01.501412] bdev_nvme.c:6888:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:46.734 [2024-04-24 21:29:01.501459] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:46.734 21:29:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:46.734 21:29:01 -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:46.734 21:29:01 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:46.734 21:29:01 -- common/autotest_common.sh@901 -- # local max=10 00:23:46.734 21:29:01 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:46.734 21:29:01 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:46.734 21:29:01 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:23:46.734 21:29:01 -- host/discovery.sh@59 -- # sort 00:23:46.734 21:29:01 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:46.734 21:29:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:46.734 21:29:01 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:46.734 21:29:01 -- common/autotest_common.sh@10 -- # set +x 00:23:46.734 21:29:01 -- host/discovery.sh@59 -- # xargs 00:23:46.734 21:29:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:46.734 21:29:01 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:46.734 21:29:01 -- common/autotest_common.sh@904 -- # return 0 00:23:46.734 21:29:01 -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:46.734 21:29:01 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:46.734 21:29:01 -- common/autotest_common.sh@901 -- # local max=10 00:23:46.734 21:29:01 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:46.734 21:29:01 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:46.734 21:29:01 -- common/autotest_common.sh@903 -- # get_bdev_list 00:23:46.734 21:29:01 -- host/discovery.sh@55 -- # sort 00:23:46.734 21:29:01 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:46.734 21:29:01 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:46.734 21:29:01 -- host/discovery.sh@55 -- # xargs 00:23:46.734 21:29:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:46.734 21:29:01 -- common/autotest_common.sh@10 -- # set +x 00:23:46.734 21:29:01 -- common/autotest_common.sh@577 
-- # [[ 0 == 0 ]] 00:23:46.734 [2024-04-24 21:29:01.590503] bdev_nvme.c:6830:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:23:46.734 21:29:01 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:46.734 21:29:01 -- common/autotest_common.sh@904 -- # return 0 00:23:46.734 21:29:01 -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:23:46.734 21:29:01 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:23:46.734 21:29:01 -- common/autotest_common.sh@901 -- # local max=10 00:23:46.734 21:29:01 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:46.734 21:29:01 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:23:46.734 21:29:01 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:23:46.734 21:29:01 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:46.734 21:29:01 -- host/discovery.sh@63 -- # xargs 00:23:46.734 21:29:01 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:46.734 21:29:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:46.734 21:29:01 -- common/autotest_common.sh@10 -- # set +x 00:23:46.734 21:29:01 -- host/discovery.sh@63 -- # sort -n 00:23:46.734 21:29:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:46.734 21:29:01 -- common/autotest_common.sh@903 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:23:46.734 21:29:01 -- common/autotest_common.sh@906 -- # sleep 1 00:23:46.734 [2024-04-24 21:29:01.690132] bdev_nvme.c:6725:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:46.734 [2024-04-24 21:29:01.690165] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:46.734 [2024-04-24 21:29:01.690176] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:48.116 21:29:02 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:48.116 21:29:02 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:23:48.116 21:29:02 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:23:48.116 21:29:02 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:48.116 21:29:02 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:48.116 21:29:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:48.116 21:29:02 -- common/autotest_common.sh@10 -- # set +x 00:23:48.116 21:29:02 -- host/discovery.sh@63 -- # sort -n 00:23:48.116 21:29:02 -- host/discovery.sh@63 -- # xargs 00:23:48.116 21:29:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:48.116 21:29:02 -- common/autotest_common.sh@903 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:23:48.116 21:29:02 -- common/autotest_common.sh@904 -- # return 0 00:23:48.116 21:29:02 -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:23:48.116 21:29:02 -- host/discovery.sh@79 -- # expected_count=0 00:23:48.116 21:29:02 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:48.116 21:29:02 -- 
common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:48.116 21:29:02 -- common/autotest_common.sh@901 -- # local max=10 00:23:48.117 21:29:02 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:48.117 21:29:02 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:48.117 21:29:02 -- common/autotest_common.sh@903 -- # get_notification_count 00:23:48.117 21:29:02 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:48.117 21:29:02 -- host/discovery.sh@74 -- # jq '. | length' 00:23:48.117 21:29:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:48.117 21:29:02 -- common/autotest_common.sh@10 -- # set +x 00:23:48.117 21:29:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:48.117 21:29:02 -- host/discovery.sh@74 -- # notification_count=0 00:23:48.117 21:29:02 -- host/discovery.sh@75 -- # notify_id=2 00:23:48.117 21:29:02 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:23:48.117 21:29:02 -- common/autotest_common.sh@904 -- # return 0 00:23:48.117 21:29:02 -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:48.117 21:29:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:48.117 21:29:02 -- common/autotest_common.sh@10 -- # set +x 00:23:48.117 [2024-04-24 21:29:02.729522] bdev_nvme.c:6888:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:48.117 [2024-04-24 21:29:02.729554] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:48.117 21:29:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:48.117 21:29:02 -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:48.117 21:29:02 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:48.117 21:29:02 -- common/autotest_common.sh@901 -- # local max=10 00:23:48.117 21:29:02 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:48.117 21:29:02 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:48.117 21:29:02 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:23:48.117 21:29:02 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:48.117 21:29:02 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:48.117 21:29:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:48.117 [2024-04-24 21:29:02.738692] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:48.117 [2024-04-24 21:29:02.738726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.117 [2024-04-24 21:29:02.738741] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:48.117 [2024-04-24 21:29:02.738751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.117 [2024-04-24 21:29:02.738761] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:48.117 [2024-04-24 21:29:02.738769] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.117 [2024-04-24 21:29:02.738778] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:48.117 21:29:02 -- host/discovery.sh@59 -- # xargs 00:23:48.117 [2024-04-24 21:29:02.738792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.117 [2024-04-24 21:29:02.738804] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005440 is same with the state(5) to be set 00:23:48.117 21:29:02 -- host/discovery.sh@59 -- # sort 00:23:48.117 21:29:02 -- common/autotest_common.sh@10 -- # set +x 00:23:48.117 21:29:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:48.117 [2024-04-24 21:29:02.748677] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005440 (9): Bad file descriptor 00:23:48.117 [2024-04-24 21:29:02.758691] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:48.117 [2024-04-24 21:29:02.759110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.117 [2024-04-24 21:29:02.759277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.117 [2024-04-24 21:29:02.759293] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005440 with addr=10.0.0.2, port=4420 00:23:48.117 [2024-04-24 21:29:02.759304] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005440 is same with the state(5) to be set 00:23:48.117 [2024-04-24 21:29:02.759320] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005440 (9): Bad file descriptor 00:23:48.117 [2024-04-24 21:29:02.759333] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:48.117 [2024-04-24 21:29:02.759343] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:48.117 [2024-04-24 21:29:02.759354] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:48.117 [2024-04-24 21:29:02.759372] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
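
Annotation: the burst of "connect() failed, errno = 111" (ECONNREFUSED) entries above is expected here: host/discovery.sh@127 just removed the 4420 listener, so every reconnect attempt against that port is refused until the discovery poller prunes the stale path. The condition being polled while this happens is get_subsystem_paths, whose body can be assembled from the @63 xtrace fragments; a minimal sketch reconstructed from the trace, not copied from host/discovery.sh:

get_subsystem_paths() { # host/discovery.sh@63, as seen in the xtrace
	local name=$1
	# List the trsvcid (port) of every active path on controller $name,
	# joined onto one line: "4420 4421" while both listeners are up,
	# then "4421" once the discovery poller drops the removed port.
	rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$name" \
		| jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
}
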
00:23:48.117 [2024-04-24 21:29:02.768739] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:48.117 [2024-04-24 21:29:02.769090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.117 [2024-04-24 21:29:02.769225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.117 [2024-04-24 21:29:02.769238] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005440 with addr=10.0.0.2, port=4420 00:23:48.117 [2024-04-24 21:29:02.769247] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005440 is same with the state(5) to be set 00:23:48.117 [2024-04-24 21:29:02.769260] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005440 (9): Bad file descriptor 00:23:48.117 [2024-04-24 21:29:02.769283] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:48.117 [2024-04-24 21:29:02.769290] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:48.117 [2024-04-24 21:29:02.769299] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:48.117 [2024-04-24 21:29:02.769310] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:48.117 21:29:02 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:48.117 21:29:02 -- common/autotest_common.sh@904 -- # return 0 00:23:48.117 21:29:02 -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:48.117 21:29:02 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:48.117 21:29:02 -- common/autotest_common.sh@901 -- # local max=10 00:23:48.117 21:29:02 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:48.117 21:29:02 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:48.117 21:29:02 -- common/autotest_common.sh@903 -- # get_bdev_list 00:23:48.117 [2024-04-24 21:29:02.778779] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:48.117 [2024-04-24 21:29:02.778957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.117 [2024-04-24 21:29:02.779234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.117 [2024-04-24 21:29:02.779245] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005440 with addr=10.0.0.2, port=4420 00:23:48.117 [2024-04-24 21:29:02.779255] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005440 is same with the state(5) to be set 00:23:48.117 [2024-04-24 21:29:02.779275] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005440 (9): Bad file descriptor 00:23:48.117 [2024-04-24 21:29:02.779291] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:48.117 [2024-04-24 21:29:02.779298] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:48.117 [2024-04-24 21:29:02.779307] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:23:48.117 [2024-04-24 21:29:02.779319] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:48.117 21:29:02 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:48.117 21:29:02 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:48.117 21:29:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:48.117 21:29:02 -- host/discovery.sh@55 -- # sort 00:23:48.117 21:29:02 -- common/autotest_common.sh@10 -- # set +x 00:23:48.117 21:29:02 -- host/discovery.sh@55 -- # xargs 00:23:48.117 [2024-04-24 21:29:02.788828] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:48.117 [2024-04-24 21:29:02.789114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.117 [2024-04-24 21:29:02.789494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.117 [2024-04-24 21:29:02.789505] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005440 with addr=10.0.0.2, port=4420 00:23:48.117 [2024-04-24 21:29:02.789514] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005440 is same with the state(5) to be set 00:23:48.117 [2024-04-24 21:29:02.789528] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005440 (9): Bad file descriptor 00:23:48.117 [2024-04-24 21:29:02.789565] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:48.117 [2024-04-24 21:29:02.789573] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:48.117 [2024-04-24 21:29:02.789581] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:48.117 [2024-04-24 21:29:02.789594] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:48.117 21:29:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:48.117 [2024-04-24 21:29:02.798868] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:48.117 [2024-04-24 21:29:02.799148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.117 [2024-04-24 21:29:02.799449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.117 [2024-04-24 21:29:02.799459] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005440 with addr=10.0.0.2, port=4420 00:23:48.117 [2024-04-24 21:29:02.799468] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005440 is same with the state(5) to be set 00:23:48.117 [2024-04-24 21:29:02.799481] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005440 (9): Bad file descriptor 00:23:48.117 [2024-04-24 21:29:02.799492] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:48.117 [2024-04-24 21:29:02.799499] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:48.117 [2024-04-24 21:29:02.799507] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:48.118 [2024-04-24 21:29:02.799519] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
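
Annotation: the @900-@906 lines interleaved with the reset spam are autotest_common.sh's waitforcondition retry loop, and @55 is the get_bdev_list helper it keeps re-evaluating. Minimal sketches reconstructed from those trace fragments (the behaviour after the last failed attempt is an assumption; @59's get_subsystem_names is the same pipeline run against bdev_nvme_get_controllers instead):

waitforcondition() { # autotest_common.sh@900-906
	local cond=$1
	local max=10
	while ((max--)); do
		eval "$cond" && return 0 # condition met, stop polling
		sleep 1                  # @906: wait one second between polls
	done
	return 1 # assumption: exhausting the attempts fails the caller
}

get_bdev_list() { # host/discovery.sh@55
	rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}
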
00:23:48.118 [2024-04-24 21:29:02.808907] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:48.118 [2024-04-24 21:29:02.809099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.118 [2024-04-24 21:29:02.809360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.118 [2024-04-24 21:29:02.809370] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005440 with addr=10.0.0.2, port=4420 00:23:48.118 [2024-04-24 21:29:02.809383] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005440 is same with the state(5) to be set 00:23:48.118 [2024-04-24 21:29:02.809395] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005440 (9): Bad file descriptor 00:23:48.118 [2024-04-24 21:29:02.809406] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:48.118 [2024-04-24 21:29:02.809414] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:48.118 [2024-04-24 21:29:02.809422] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:48.118 [2024-04-24 21:29:02.809433] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:48.118 21:29:02 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:48.118 21:29:02 -- common/autotest_common.sh@904 -- # return 0 00:23:48.118 21:29:02 -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:23:48.118 21:29:02 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:23:48.118 21:29:02 -- common/autotest_common.sh@901 -- # local max=10 00:23:48.118 21:29:02 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:48.118 [2024-04-24 21:29:02.818947] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:48.118 21:29:02 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:23:48.118 [2024-04-24 21:29:02.819122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.118 [2024-04-24 21:29:02.819547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.118 [2024-04-24 21:29:02.819566] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005440 with addr=10.0.0.2, port=4420 00:23:48.118 [2024-04-24 21:29:02.819578] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005440 is same with the state(5) to be set 00:23:48.118 [2024-04-24 21:29:02.819597] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005440 (9): Bad file descriptor 00:23:48.118 [2024-04-24 21:29:02.819619] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:48.118 [2024-04-24 21:29:02.819627] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:48.118 [2024-04-24 21:29:02.819638] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:23:48.118 [2024-04-24 21:29:02.819652] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:48.118 21:29:02 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:23:48.118 21:29:02 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:48.118 21:29:02 -- host/discovery.sh@63 -- # xargs 00:23:48.118 21:29:02 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:48.118 21:29:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:48.118 21:29:02 -- common/autotest_common.sh@10 -- # set +x 00:23:48.118 21:29:02 -- host/discovery.sh@63 -- # sort -n 00:23:48.118 [2024-04-24 21:29:02.828993] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:48.118 [2024-04-24 21:29:02.829280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.118 [2024-04-24 21:29:02.829550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.118 [2024-04-24 21:29:02.829561] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005440 with addr=10.0.0.2, port=4420 00:23:48.118 [2024-04-24 21:29:02.829571] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005440 is same with the state(5) to be set 00:23:48.118 [2024-04-24 21:29:02.829585] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005440 (9): Bad file descriptor 00:23:48.118 [2024-04-24 21:29:02.829782] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:48.118 [2024-04-24 21:29:02.829791] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:48.118 [2024-04-24 21:29:02.829800] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:48.118 [2024-04-24 21:29:02.829819] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:48.118 21:29:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:48.118 [2024-04-24 21:29:02.839036] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:48.118 [2024-04-24 21:29:02.839464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.118 [2024-04-24 21:29:02.839748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.118 [2024-04-24 21:29:02.839760] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005440 with addr=10.0.0.2, port=4420 00:23:48.118 [2024-04-24 21:29:02.839769] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005440 is same with the state(5) to be set 00:23:48.118 [2024-04-24 21:29:02.839782] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005440 (9): Bad file descriptor 00:23:48.118 [2024-04-24 21:29:02.839793] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:48.118 [2024-04-24 21:29:02.839800] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:48.118 [2024-04-24 21:29:02.839808] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:23:48.118 [2024-04-24 21:29:02.839826] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:48.118 [2024-04-24 21:29:02.849071] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:48.118 [2024-04-24 21:29:02.849371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.118 [2024-04-24 21:29:02.849562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.118 [2024-04-24 21:29:02.849572] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005440 with addr=10.0.0.2, port=4420 00:23:48.118 [2024-04-24 21:29:02.849582] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005440 is same with the state(5) to be set 00:23:48.118 [2024-04-24 21:29:02.849595] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005440 (9): Bad file descriptor 00:23:48.118 [2024-04-24 21:29:02.849613] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:48.118 [2024-04-24 21:29:02.849620] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:48.118 [2024-04-24 21:29:02.849629] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:48.118 [2024-04-24 21:29:02.849642] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:48.118 [2024-04-24 21:29:02.858851] bdev_nvme.c:6693:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:23:48.118 [2024-04-24 21:29:02.858880] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:48.118 21:29:02 -- common/autotest_common.sh@903 -- # [[ 4420 4421 == \4\4\2\1 ]] 00:23:48.118 21:29:02 -- common/autotest_common.sh@906 -- # sleep 1 00:23:49.054 21:29:03 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:49.054 21:29:03 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:23:49.054 21:29:03 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:23:49.054 21:29:03 -- host/discovery.sh@63 -- # xargs 00:23:49.054 21:29:03 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:49.054 21:29:03 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:49.054 21:29:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:49.054 21:29:03 -- host/discovery.sh@63 -- # sort -n 00:23:49.054 21:29:03 -- common/autotest_common.sh@10 -- # set +x 00:23:49.054 21:29:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:49.054 21:29:03 -- common/autotest_common.sh@903 -- # [[ 4421 == \4\4\2\1 ]] 00:23:49.054 21:29:03 -- common/autotest_common.sh@904 -- # return 0 00:23:49.054 21:29:03 -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:23:49.054 21:29:03 -- host/discovery.sh@79 -- # expected_count=0 00:23:49.054 21:29:03 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:49.054 21:29:03 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:49.054 21:29:03 -- common/autotest_common.sh@901 
-- # local max=10 00:23:49.054 21:29:03 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:49.054 21:29:03 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:49.054 21:29:03 -- common/autotest_common.sh@903 -- # get_notification_count 00:23:49.054 21:29:03 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:49.054 21:29:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:49.054 21:29:03 -- common/autotest_common.sh@10 -- # set +x 00:23:49.054 21:29:03 -- host/discovery.sh@74 -- # jq '. | length' 00:23:49.054 21:29:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:49.054 21:29:03 -- host/discovery.sh@74 -- # notification_count=0 00:23:49.054 21:29:03 -- host/discovery.sh@75 -- # notify_id=2 00:23:49.054 21:29:03 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:23:49.054 21:29:03 -- common/autotest_common.sh@904 -- # return 0 00:23:49.054 21:29:03 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:23:49.054 21:29:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:49.054 21:29:03 -- common/autotest_common.sh@10 -- # set +x 00:23:49.054 21:29:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:49.054 21:29:03 -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:23:49.054 21:29:03 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:23:49.054 21:29:03 -- common/autotest_common.sh@901 -- # local max=10 00:23:49.054 21:29:03 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:49.054 21:29:03 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:23:49.054 21:29:03 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:23:49.054 21:29:03 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:49.054 21:29:03 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:49.054 21:29:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:49.054 21:29:03 -- host/discovery.sh@59 -- # sort 00:23:49.054 21:29:03 -- host/discovery.sh@59 -- # xargs 00:23:49.054 21:29:03 -- common/autotest_common.sh@10 -- # set +x 00:23:49.054 21:29:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:49.054 21:29:03 -- common/autotest_common.sh@903 -- # [[ '' == '' ]] 00:23:49.054 21:29:03 -- common/autotest_common.sh@904 -- # return 0 00:23:49.054 21:29:03 -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:23:49.054 21:29:03 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:23:49.054 21:29:03 -- common/autotest_common.sh@901 -- # local max=10 00:23:49.054 21:29:03 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:49.054 21:29:03 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:23:49.054 21:29:03 -- common/autotest_common.sh@903 -- # get_bdev_list 00:23:49.054 21:29:04 -- host/discovery.sh@55 -- # xargs 00:23:49.054 21:29:04 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:49.054 21:29:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:49.054 21:29:04 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:49.054 21:29:04 -- common/autotest_common.sh@10 -- # set +x 00:23:49.054 21:29:04 -- host/discovery.sh@55 -- # sort 00:23:49.054 21:29:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:49.316 
21:29:04 -- common/autotest_common.sh@903 -- # [[ '' == '' ]] 00:23:49.316 21:29:04 -- common/autotest_common.sh@904 -- # return 0 00:23:49.316 21:29:04 -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:23:49.316 21:29:04 -- host/discovery.sh@79 -- # expected_count=2 00:23:49.316 21:29:04 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:49.316 21:29:04 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:49.316 21:29:04 -- common/autotest_common.sh@901 -- # local max=10 00:23:49.316 21:29:04 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:49.316 21:29:04 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:49.316 21:29:04 -- common/autotest_common.sh@903 -- # get_notification_count 00:23:49.316 21:29:04 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:49.316 21:29:04 -- host/discovery.sh@74 -- # jq '. | length' 00:23:49.316 21:29:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:49.316 21:29:04 -- common/autotest_common.sh@10 -- # set +x 00:23:49.316 21:29:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:49.316 21:29:04 -- host/discovery.sh@74 -- # notification_count=2 00:23:49.316 21:29:04 -- host/discovery.sh@75 -- # notify_id=4 00:23:49.316 21:29:04 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:23:49.316 21:29:04 -- common/autotest_common.sh@904 -- # return 0 00:23:49.316 21:29:04 -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:49.316 21:29:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:49.316 21:29:04 -- common/autotest_common.sh@10 -- # set +x 00:23:50.254 [2024-04-24 21:29:05.129381] bdev_nvme.c:6906:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:50.254 [2024-04-24 21:29:05.129407] bdev_nvme.c:6986:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:50.254 [2024-04-24 21:29:05.129424] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:50.512 [2024-04-24 21:29:05.260518] bdev_nvme.c:6835:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:23:50.770 [2024-04-24 21:29:05.526916] bdev_nvme.c:6725:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:50.770 [2024-04-24 21:29:05.526955] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:50.770 21:29:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:50.770 21:29:05 -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:50.770 21:29:05 -- common/autotest_common.sh@638 -- # local es=0 00:23:50.770 21:29:05 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:50.770 21:29:05 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:23:50.770 21:29:05 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 
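
Annotation: the @638/@640/@626/@630 fragments here belong to autotest_common.sh's NOT() wrapper: host/discovery.sh@143 re-issues bdev_nvme_start_discovery under an already-registered name and asserts the RPC is rejected (-17 "File exists", shown just below). A simplified sketch of the inversion logic inferred from the trace; the real helper also type-checks the command via valid_exec_arg and masks signal exit codes (the (( es > 128 )) branch):

NOT() {
	local es=0
	"$@" || es=$? # run the wrapped command, capturing its exit status
	((es != 0))   # invert: NOT succeeds only when the command failed
}
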
00:23:50.770 21:29:05 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:23:50.770 21:29:05 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:50.770 21:29:05 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:50.770 21:29:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:50.770 21:29:05 -- common/autotest_common.sh@10 -- # set +x 00:23:50.770 request: 00:23:50.770 { 00:23:50.770 "name": "nvme", 00:23:50.770 "trtype": "tcp", 00:23:50.770 "traddr": "10.0.0.2", 00:23:50.770 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:50.770 "adrfam": "ipv4", 00:23:50.770 "trsvcid": "8009", 00:23:50.770 "wait_for_attach": true, 00:23:50.770 "method": "bdev_nvme_start_discovery", 00:23:50.770 "req_id": 1 00:23:50.770 } 00:23:50.770 Got JSON-RPC error response 00:23:50.770 response: 00:23:50.770 { 00:23:50.770 "code": -17, 00:23:50.770 "message": "File exists" 00:23:50.770 } 00:23:50.770 21:29:05 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:23:50.770 21:29:05 -- common/autotest_common.sh@641 -- # es=1 00:23:50.770 21:29:05 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:23:50.770 21:29:05 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:23:50.770 21:29:05 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:23:50.770 21:29:05 -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:23:50.770 21:29:05 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:50.770 21:29:05 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:50.770 21:29:05 -- host/discovery.sh@67 -- # sort 00:23:50.770 21:29:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:50.770 21:29:05 -- host/discovery.sh@67 -- # xargs 00:23:50.770 21:29:05 -- common/autotest_common.sh@10 -- # set +x 00:23:50.770 21:29:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:50.770 21:29:05 -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:23:50.770 21:29:05 -- host/discovery.sh@146 -- # get_bdev_list 00:23:50.770 21:29:05 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:50.770 21:29:05 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:50.770 21:29:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:50.770 21:29:05 -- host/discovery.sh@55 -- # sort 00:23:50.770 21:29:05 -- common/autotest_common.sh@10 -- # set +x 00:23:50.770 21:29:05 -- host/discovery.sh@55 -- # xargs 00:23:50.770 21:29:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:50.770 21:29:05 -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:50.770 21:29:05 -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:50.770 21:29:05 -- common/autotest_common.sh@638 -- # local es=0 00:23:50.770 21:29:05 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:50.770 21:29:05 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:23:50.770 21:29:05 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:50.771 21:29:05 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:23:50.771 21:29:05 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:50.771 21:29:05 -- common/autotest_common.sh@641 -- # 
rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:50.771 21:29:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:50.771 21:29:05 -- common/autotest_common.sh@10 -- # set +x 00:23:50.771 request: 00:23:50.771 { 00:23:50.771 "name": "nvme_second", 00:23:50.771 "trtype": "tcp", 00:23:50.771 "traddr": "10.0.0.2", 00:23:50.771 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:50.771 "adrfam": "ipv4", 00:23:50.771 "trsvcid": "8009", 00:23:50.771 "wait_for_attach": true, 00:23:50.771 "method": "bdev_nvme_start_discovery", 00:23:50.771 "req_id": 1 00:23:50.771 } 00:23:50.771 Got JSON-RPC error response 00:23:50.771 response: 00:23:50.771 { 00:23:50.771 "code": -17, 00:23:50.771 "message": "File exists" 00:23:50.771 } 00:23:50.771 21:29:05 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:23:50.771 21:29:05 -- common/autotest_common.sh@641 -- # es=1 00:23:50.771 21:29:05 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:23:50.771 21:29:05 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:23:50.771 21:29:05 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:23:50.771 21:29:05 -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:23:50.771 21:29:05 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:50.771 21:29:05 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:50.771 21:29:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:50.771 21:29:05 -- host/discovery.sh@67 -- # sort 00:23:50.771 21:29:05 -- host/discovery.sh@67 -- # xargs 00:23:50.771 21:29:05 -- common/autotest_common.sh@10 -- # set +x 00:23:50.771 21:29:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:50.771 21:29:05 -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:23:50.771 21:29:05 -- host/discovery.sh@152 -- # get_bdev_list 00:23:50.771 21:29:05 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:50.771 21:29:05 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:50.771 21:29:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:50.771 21:29:05 -- common/autotest_common.sh@10 -- # set +x 00:23:50.771 21:29:05 -- host/discovery.sh@55 -- # sort 00:23:50.771 21:29:05 -- host/discovery.sh@55 -- # xargs 00:23:50.771 21:29:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:50.771 21:29:05 -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:50.771 21:29:05 -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:50.771 21:29:05 -- common/autotest_common.sh@638 -- # local es=0 00:23:50.771 21:29:05 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:50.771 21:29:05 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:23:50.771 21:29:05 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:50.771 21:29:05 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:23:50.771 21:29:05 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:50.771 21:29:05 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:50.771 21:29:05 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:23:50.771 21:29:05 -- common/autotest_common.sh@10 -- # set +x 00:23:52.150 [2024-04-24 21:29:06.727636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:52.151 [2024-04-24 21:29:06.727925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:52.151 [2024-04-24 21:29:06.727943] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000011840 with addr=10.0.0.2, port=8010 00:23:52.151 [2024-04-24 21:29:06.727974] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:23:52.151 [2024-04-24 21:29:06.727995] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:52.151 [2024-04-24 21:29:06.728005] bdev_nvme.c:6968:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:23:53.085 [2024-04-24 21:29:07.727550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.085 [2024-04-24 21:29:07.727762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.085 [2024-04-24 21:29:07.727773] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000011a40 with addr=10.0.0.2, port=8010 00:23:53.085 [2024-04-24 21:29:07.727802] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:23:53.085 [2024-04-24 21:29:07.727812] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:53.085 [2024-04-24 21:29:07.727821] bdev_nvme.c:6968:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:23:54.027 [2024-04-24 21:29:08.727149] bdev_nvme.c:6949:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:23:54.027 request: 00:23:54.027 { 00:23:54.027 "name": "nvme_second", 00:23:54.027 "trtype": "tcp", 00:23:54.027 "traddr": "10.0.0.2", 00:23:54.027 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:54.027 "adrfam": "ipv4", 00:23:54.027 "trsvcid": "8010", 00:23:54.027 "attach_timeout_ms": 3000, 00:23:54.027 "method": "bdev_nvme_start_discovery", 00:23:54.027 "req_id": 1 00:23:54.027 } 00:23:54.027 Got JSON-RPC error response 00:23:54.027 response: 00:23:54.027 { 00:23:54.027 "code": -110, 00:23:54.027 "message": "Connection timed out" 00:23:54.027 } 00:23:54.027 21:29:08 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:23:54.027 21:29:08 -- common/autotest_common.sh@641 -- # es=1 00:23:54.027 21:29:08 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:23:54.027 21:29:08 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:23:54.027 21:29:08 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:23:54.027 21:29:08 -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:23:54.027 21:29:08 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:54.027 21:29:08 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:54.027 21:29:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:54.027 21:29:08 -- host/discovery.sh@67 -- # sort 00:23:54.027 21:29:08 -- common/autotest_common.sh@10 -- # set +x 00:23:54.027 21:29:08 -- host/discovery.sh@67 -- # xargs 00:23:54.027 21:29:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:54.027 21:29:08 -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:23:54.027 21:29:08 -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:23:54.027 21:29:08 -- host/discovery.sh@161 -- # kill 1316338 00:23:54.027 21:29:08 -- host/discovery.sh@162 -- # nvmftestfini 
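
Annotation: the @155 case above points nvme_second at port 8010, where nothing is listening, with -T 3000 (attach_timeout_ms): after two refused connects the discovery poller gives up and the RPC surfaces -110 "Connection timed out", as the JSON above records. Stripped of the rpc_cmd wrapper, the equivalent standalone call would presumably look like this (the scripts/rpc.py path is assumed, not taken from this log):

# Hypothetical standalone reproduction of the @155 negative case:
scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second \
	-t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000
# Expected: JSON-RPC error -110 "Connection timed out" after ~3000 ms.
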
00:23:54.027 21:29:08 -- nvmf/common.sh@477 -- # nvmfcleanup 00:23:54.027 21:29:08 -- nvmf/common.sh@117 -- # sync 00:23:54.027 21:29:08 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:54.027 21:29:08 -- nvmf/common.sh@120 -- # set +e 00:23:54.027 21:29:08 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:54.027 21:29:08 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:54.027 rmmod nvme_tcp 00:23:54.027 rmmod nvme_fabrics 00:23:54.027 rmmod nvme_keyring 00:23:54.027 21:29:08 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:54.027 21:29:08 -- nvmf/common.sh@124 -- # set -e 00:23:54.027 21:29:08 -- nvmf/common.sh@125 -- # return 0 00:23:54.027 21:29:08 -- nvmf/common.sh@478 -- # '[' -n 1316151 ']' 00:23:54.027 21:29:08 -- nvmf/common.sh@479 -- # killprocess 1316151 00:23:54.027 21:29:08 -- common/autotest_common.sh@936 -- # '[' -z 1316151 ']' 00:23:54.027 21:29:08 -- common/autotest_common.sh@940 -- # kill -0 1316151 00:23:54.027 21:29:08 -- common/autotest_common.sh@941 -- # uname 00:23:54.027 21:29:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:54.027 21:29:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1316151 00:23:54.027 21:29:08 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:54.027 21:29:08 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:54.027 21:29:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1316151' 00:23:54.027 killing process with pid 1316151 00:23:54.027 21:29:08 -- common/autotest_common.sh@955 -- # kill 1316151 00:23:54.027 21:29:08 -- common/autotest_common.sh@960 -- # wait 1316151 00:23:54.597 21:29:09 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:23:54.597 21:29:09 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:23:54.597 21:29:09 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:23:54.597 21:29:09 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:54.597 21:29:09 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:54.597 21:29:09 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:54.597 21:29:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:54.597 21:29:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:56.502 21:29:11 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:56.502 00:23:56.502 real 0m19.782s 00:23:56.502 user 0m24.000s 00:23:56.502 sys 0m6.300s 00:23:56.502 21:29:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:56.502 21:29:11 -- common/autotest_common.sh@10 -- # set +x 00:23:56.502 ************************************ 00:23:56.502 END TEST nvmf_discovery 00:23:56.502 ************************************ 00:23:56.502 21:29:11 -- nvmf/nvmf.sh@100 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:23:56.502 21:29:11 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:56.502 21:29:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:56.502 21:29:11 -- common/autotest_common.sh@10 -- # set +x 00:23:56.760 ************************************ 00:23:56.760 START TEST nvmf_discovery_remove_ifc 00:23:56.760 ************************************ 00:23:56.760 21:29:11 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:23:56.760 * Looking for test storage... 
00:23:56.760 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host 00:23:56.761 21:29:11 -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:23:56.761 21:29:11 -- nvmf/common.sh@7 -- # uname -s 00:23:56.761 21:29:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:56.761 21:29:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:56.761 21:29:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:56.761 21:29:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:56.761 21:29:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:56.761 21:29:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:56.761 21:29:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:56.761 21:29:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:56.761 21:29:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:56.761 21:29:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:56.761 21:29:11 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b7babf-2e5c-ee11-906e-a4bf01970bf2 00:23:56.761 21:29:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=80b7babf-2e5c-ee11-906e-a4bf01970bf2 00:23:56.761 21:29:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:56.761 21:29:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:56.761 21:29:11 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:23:56.761 21:29:11 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:56.761 21:29:11 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:23:56.761 21:29:11 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:56.761 21:29:11 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:56.761 21:29:11 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:56.761 21:29:11 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:56.761 21:29:11 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:56.761 21:29:11 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:56.761 21:29:11 -- paths/export.sh@5 -- # export PATH 00:23:56.761 21:29:11 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:56.761 21:29:11 -- nvmf/common.sh@47 -- # : 0 00:23:56.761 21:29:11 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:56.761 21:29:11 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:56.761 21:29:11 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:56.761 21:29:11 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:56.761 21:29:11 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:56.761 21:29:11 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:56.761 21:29:11 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:56.761 21:29:11 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:56.761 21:29:11 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:23:56.761 21:29:11 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:23:56.761 21:29:11 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:23:56.761 21:29:11 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:23:56.761 21:29:11 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:23:56.761 21:29:11 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:23:56.761 21:29:11 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:23:56.761 21:29:11 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:23:56.761 21:29:11 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:56.761 21:29:11 -- nvmf/common.sh@437 -- # prepare_net_devs 00:23:56.761 21:29:11 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:23:56.761 21:29:11 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:23:56.761 21:29:11 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:56.761 21:29:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:56.761 21:29:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:56.761 21:29:11 -- nvmf/common.sh@403 -- # [[ phy-fallback != virt ]] 00:23:56.761 21:29:11 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:23:56.761 21:29:11 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:56.761 21:29:11 -- common/autotest_common.sh@10 -- # set +x 00:24:02.035 21:29:16 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:02.035 21:29:16 -- nvmf/common.sh@291 -- # pci_devs=() 00:24:02.035 21:29:16 -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:02.035 
21:29:16 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:02.035 21:29:16 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:02.035 21:29:16 -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:02.035 21:29:16 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:02.035 21:29:16 -- nvmf/common.sh@295 -- # net_devs=() 00:24:02.035 21:29:16 -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:02.035 21:29:16 -- nvmf/common.sh@296 -- # e810=() 00:24:02.035 21:29:16 -- nvmf/common.sh@296 -- # local -ga e810 00:24:02.035 21:29:16 -- nvmf/common.sh@297 -- # x722=() 00:24:02.035 21:29:16 -- nvmf/common.sh@297 -- # local -ga x722 00:24:02.035 21:29:16 -- nvmf/common.sh@298 -- # mlx=() 00:24:02.035 21:29:16 -- nvmf/common.sh@298 -- # local -ga mlx 00:24:02.035 21:29:16 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:02.035 21:29:16 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:02.035 21:29:16 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:02.035 21:29:16 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:02.035 21:29:16 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:02.035 21:29:16 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:02.035 21:29:16 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:02.035 21:29:16 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:02.035 21:29:16 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:02.035 21:29:16 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:02.035 21:29:16 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:02.035 21:29:16 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:02.035 21:29:16 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:02.035 21:29:16 -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:24:02.035 21:29:16 -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:24:02.035 21:29:16 -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:24:02.035 21:29:16 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:02.035 21:29:16 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:02.035 21:29:16 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:24:02.035 Found 0000:27:00.0 (0x8086 - 0x159b) 00:24:02.035 21:29:16 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:02.035 21:29:16 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:02.035 21:29:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:02.035 21:29:16 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:02.035 21:29:16 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:02.035 21:29:16 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:02.035 21:29:16 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:24:02.035 Found 0000:27:00.1 (0x8086 - 0x159b) 00:24:02.035 21:29:16 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:02.035 21:29:16 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:02.035 21:29:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:02.035 21:29:16 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:02.035 21:29:16 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:02.035 21:29:16 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:02.035 21:29:16 -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:24:02.035 21:29:16 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:02.035 
21:29:16 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:02.035 21:29:16 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:24:02.035 21:29:16 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:02.035 21:29:16 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:24:02.035 Found net devices under 0000:27:00.0: cvl_0_0 00:24:02.035 21:29:16 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:24:02.035 21:29:16 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:02.035 21:29:16 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:02.035 21:29:16 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:24:02.036 21:29:16 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:02.036 21:29:16 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:24:02.036 Found net devices under 0000:27:00.1: cvl_0_1 00:24:02.036 21:29:16 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:24:02.036 21:29:16 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:24:02.036 21:29:16 -- nvmf/common.sh@403 -- # is_hw=yes 00:24:02.036 21:29:16 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:24:02.036 21:29:16 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:24:02.036 21:29:16 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:24:02.036 21:29:16 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:02.036 21:29:16 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:02.036 21:29:16 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:02.036 21:29:16 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:02.036 21:29:16 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:02.036 21:29:16 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:02.036 21:29:16 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:02.036 21:29:16 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:02.036 21:29:16 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:02.036 21:29:16 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:02.036 21:29:16 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:02.036 21:29:16 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:02.036 21:29:16 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:02.036 21:29:16 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:02.036 21:29:16 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:02.036 21:29:16 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:02.036 21:29:16 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:02.036 21:29:16 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:02.036 21:29:16 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:02.036 21:29:16 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:02.036 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:02.036 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.492 ms 00:24:02.036 00:24:02.036 --- 10.0.0.2 ping statistics --- 00:24:02.036 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:02.036 rtt min/avg/max/mdev = 0.492/0.492/0.492/0.000 ms 00:24:02.036 21:29:16 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:02.036 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:02.036 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.270 ms 00:24:02.036 00:24:02.036 --- 10.0.0.1 ping statistics --- 00:24:02.036 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:02.036 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:24:02.036 21:29:16 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:02.036 21:29:16 -- nvmf/common.sh@411 -- # return 0 00:24:02.036 21:29:16 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:24:02.036 21:29:16 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:02.036 21:29:16 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:24:02.036 21:29:16 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:24:02.036 21:29:16 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:02.036 21:29:16 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:24:02.036 21:29:16 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:24:02.036 21:29:16 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:24:02.036 21:29:16 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:24:02.036 21:29:16 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:02.036 21:29:16 -- common/autotest_common.sh@10 -- # set +x 00:24:02.036 21:29:16 -- nvmf/common.sh@470 -- # nvmfpid=1322274 00:24:02.036 21:29:16 -- nvmf/common.sh@471 -- # waitforlisten 1322274 00:24:02.036 21:29:16 -- common/autotest_common.sh@817 -- # '[' -z 1322274 ']' 00:24:02.036 21:29:16 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:02.036 21:29:16 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:02.036 21:29:16 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:02.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:02.036 21:29:16 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:02.036 21:29:16 -- common/autotest_common.sh@10 -- # set +x 00:24:02.036 21:29:16 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:02.295 [2024-04-24 21:29:17.029072] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization... 00:24:02.295 [2024-04-24 21:29:17.029183] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:02.295 EAL: No free 2048 kB hugepages reported on node 1 00:24:02.295 [2024-04-24 21:29:17.153970] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:02.295 [2024-04-24 21:29:17.250991] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:02.295 [2024-04-24 21:29:17.251026] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:02.295 [2024-04-24 21:29:17.251035] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:02.295 [2024-04-24 21:29:17.251044] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:02.295 [2024-04-24 21:29:17.251052] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:02.295 [2024-04-24 21:29:17.251083] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:02.874 21:29:17 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:02.874 21:29:17 -- common/autotest_common.sh@850 -- # return 0 00:24:02.874 21:29:17 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:24:02.874 21:29:17 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:02.874 21:29:17 -- common/autotest_common.sh@10 -- # set +x 00:24:02.874 21:29:17 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:02.874 21:29:17 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:24:02.874 21:29:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:02.874 21:29:17 -- common/autotest_common.sh@10 -- # set +x 00:24:02.874 [2024-04-24 21:29:17.759140] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:02.874 [2024-04-24 21:29:17.767301] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:24:02.874 null0 00:24:02.874 [2024-04-24 21:29:17.799222] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:02.874 21:29:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:02.874 21:29:17 -- host/discovery_remove_ifc.sh@59 -- # hostpid=1322529 00:24:02.874 21:29:17 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1322529 /tmp/host.sock 00:24:02.874 21:29:17 -- common/autotest_common.sh@817 -- # '[' -z 1322529 ']' 00:24:02.874 21:29:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/tmp/host.sock 00:24:02.874 21:29:17 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:02.874 21:29:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:24:02.874 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:24:02.874 21:29:17 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:02.874 21:29:17 -- common/autotest_common.sh@10 -- # set +x 00:24:02.874 21:29:17 -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:24:03.133 [2024-04-24 21:29:17.894976] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization... 
00:24:03.133 [2024-04-24 21:29:17.895087] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1322529 ] 00:24:03.133 EAL: No free 2048 kB hugepages reported on node 1 00:24:03.133 [2024-04-24 21:29:18.007749] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:03.394 [2024-04-24 21:29:18.102450] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:03.653 21:29:18 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:03.653 21:29:18 -- common/autotest_common.sh@850 -- # return 0 00:24:03.653 21:29:18 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:03.653 21:29:18 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:24:03.654 21:29:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:03.654 21:29:18 -- common/autotest_common.sh@10 -- # set +x 00:24:03.654 21:29:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:03.654 21:29:18 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:24:03.654 21:29:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:03.654 21:29:18 -- common/autotest_common.sh@10 -- # set +x 00:24:03.913 21:29:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:03.913 21:29:18 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:24:03.913 21:29:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:03.913 21:29:18 -- common/autotest_common.sh@10 -- # set +x 00:24:04.849 [2024-04-24 21:29:19.804385] bdev_nvme.c:6906:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:04.849 [2024-04-24 21:29:19.804417] bdev_nvme.c:6986:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:04.849 [2024-04-24 21:29:19.804436] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:05.109 [2024-04-24 21:29:19.933533] bdev_nvme.c:6835:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:24:05.109 [2024-04-24 21:29:19.993433] bdev_nvme.c:7696:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:24:05.109 [2024-04-24 21:29:19.993494] bdev_nvme.c:7696:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:24:05.109 [2024-04-24 21:29:19.993535] bdev_nvme.c:7696:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:24:05.109 [2024-04-24 21:29:19.993556] bdev_nvme.c:6725:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:05.109 [2024-04-24 21:29:19.993582] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:05.109 21:29:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:05.109 21:29:19 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:24:05.109 21:29:19 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:05.109 21:29:19 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:05.109 21:29:19 -- 
host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:05.109 21:29:19 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:05.109 21:29:19 -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:05.109 21:29:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:05.109 21:29:19 -- common/autotest_common.sh@10 -- # set +x 00:24:05.109 [2024-04-24 21:29:20.002308] bdev_nvme.c:1605:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x614000006840 was disconnected and freed. delete nvme_qpair. 00:24:05.109 21:29:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:05.109 21:29:20 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:24:05.109 21:29:20 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:24:05.109 21:29:20 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:24:05.368 21:29:20 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:24:05.368 21:29:20 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:05.368 21:29:20 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:05.368 21:29:20 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:05.368 21:29:20 -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:05.368 21:29:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:05.368 21:29:20 -- common/autotest_common.sh@10 -- # set +x 00:24:05.368 21:29:20 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:05.368 21:29:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:05.368 21:29:20 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:05.368 21:29:20 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:06.309 21:29:21 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:06.309 21:29:21 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:06.309 21:29:21 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:06.309 21:29:21 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:06.309 21:29:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:06.309 21:29:21 -- common/autotest_common.sh@10 -- # set +x 00:24:06.309 21:29:21 -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:06.309 21:29:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:06.309 21:29:21 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:06.309 21:29:21 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:07.688 21:29:22 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:07.688 21:29:22 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:07.688 21:29:22 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:07.688 21:29:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:07.688 21:29:22 -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:07.688 21:29:22 -- common/autotest_common.sh@10 -- # set +x 00:24:07.688 21:29:22 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:07.688 21:29:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:07.688 21:29:22 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:07.688 21:29:22 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:08.626 21:29:23 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:08.626 21:29:23 -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:08.626 21:29:23 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:08.626 21:29:23 -- 
host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:08.626 21:29:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:08.627 21:29:23 -- common/autotest_common.sh@10 -- # set +x 00:24:08.627 21:29:23 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:08.627 21:29:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:08.627 21:29:23 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:08.627 21:29:23 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:09.568 21:29:24 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:09.568 21:29:24 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:09.568 21:29:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:09.568 21:29:24 -- common/autotest_common.sh@10 -- # set +x 00:24:09.568 21:29:24 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:09.568 21:29:24 -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:09.568 21:29:24 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:09.568 21:29:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:09.568 21:29:24 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:09.568 21:29:24 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:10.507 21:29:25 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:10.507 21:29:25 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:10.507 21:29:25 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:10.507 21:29:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:10.507 21:29:25 -- common/autotest_common.sh@10 -- # set +x 00:24:10.507 21:29:25 -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:10.507 21:29:25 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:10.507 21:29:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:10.507 21:29:25 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:10.507 21:29:25 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:10.507 [2024-04-24 21:29:25.421699] /var/jenkins/workspace/dsa-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:24:10.507 [2024-04-24 21:29:25.421770] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:10.507 [2024-04-24 21:29:25.421786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:10.507 [2024-04-24 21:29:25.421801] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:10.507 [2024-04-24 21:29:25.421815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:10.507 [2024-04-24 21:29:25.421824] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:10.507 [2024-04-24 21:29:25.421832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:10.507 [2024-04-24 21:29:25.421840] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:10.507 [2024-04-24 21:29:25.421852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:10.507 [2024-04-24 21:29:25.421861] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:10.507 [2024-04-24 21:29:25.421869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:10.507 [2024-04-24 21:29:25.421877] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005640 is same with the state(5) to be set 00:24:10.507 [2024-04-24 21:29:25.431691] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005640 (9): Bad file descriptor 00:24:10.507 [2024-04-24 21:29:25.441711] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:11.885 21:29:26 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:11.885 21:29:26 -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:11.885 21:29:26 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:11.885 21:29:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:11.885 21:29:26 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:11.885 21:29:26 -- common/autotest_common.sh@10 -- # set +x 00:24:11.885 21:29:26 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:11.885 [2024-04-24 21:29:26.461347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:24:12.822 [2024-04-24 21:29:27.485298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:24:12.822 [2024-04-24 21:29:27.485367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005640 with addr=10.0.0.2, port=4420 00:24:12.822 [2024-04-24 21:29:27.485391] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005640 is same with the state(5) to be set 00:24:12.822 [2024-04-24 21:29:27.486010] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005640 (9): Bad file descriptor 00:24:12.822 [2024-04-24 21:29:27.486045] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:12.822 [2024-04-24 21:29:27.486092] bdev_nvme.c:6657:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:24:12.822 [2024-04-24 21:29:27.486130] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:12.822 [2024-04-24 21:29:27.486151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.822 [2024-04-24 21:29:27.486176] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:12.822 [2024-04-24 21:29:27.486190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.822 [2024-04-24 21:29:27.486205] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:12.822 [2024-04-24 21:29:27.486219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.823 [2024-04-24 21:29:27.486233] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:12.823 [2024-04-24 21:29:27.486248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.823 [2024-04-24 21:29:27.486263] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:12.823 [2024-04-24 21:29:27.486291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.823 [2024-04-24 21:29:27.486306] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:24:12.823 [2024-04-24 21:29:27.486431] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005240 (9): Bad file descriptor 00:24:12.823 [2024-04-24 21:29:27.487506] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:24:12.823 [2024-04-24 21:29:27.487525] nvme_ctrlr.c:1148:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:24:12.823 21:29:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:12.823 21:29:27 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:12.823 21:29:27 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:13.763 21:29:28 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:13.763 21:29:28 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:13.763 21:29:28 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:13.763 21:29:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:13.763 21:29:28 -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:13.763 21:29:28 -- common/autotest_common.sh@10 -- # set +x 00:24:13.763 21:29:28 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:13.763 21:29:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:13.763 21:29:28 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:24:13.763 21:29:28 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:13.763 21:29:28 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:13.763 21:29:28 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:24:13.763 21:29:28 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:13.763 21:29:28 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:13.763 21:29:28 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:13.763 21:29:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:13.763 21:29:28 -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:13.763 21:29:28 -- common/autotest_common.sh@10 -- # set +x 00:24:13.763 21:29:28 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:13.763 21:29:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:13.763 21:29:28 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:24:13.763 21:29:28 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:14.699 [2024-04-24 21:29:29.536713] bdev_nvme.c:6906:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:14.699 [2024-04-24 21:29:29.536739] bdev_nvme.c:6986:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:14.699 [2024-04-24 21:29:29.536766] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:14.699 [2024-04-24 21:29:29.624819] bdev_nvme.c:6835:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:24:14.958 21:29:29 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:14.958 21:29:29 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:14.958 21:29:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:14.958 21:29:29 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:14.958 21:29:29 -- common/autotest_common.sh@10 -- # set +x 00:24:14.958 21:29:29 -- host/discovery_remove_ifc.sh@29 -- # sort 
00:24:14.958 21:29:29 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:14.958 21:29:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:14.958 21:29:29 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:24:14.958 21:29:29 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:14.958 [2024-04-24 21:29:29.853154] bdev_nvme.c:7696:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:24:14.958 [2024-04-24 21:29:29.853213] bdev_nvme.c:7696:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:24:14.958 [2024-04-24 21:29:29.853250] bdev_nvme.c:7696:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:24:14.958 [2024-04-24 21:29:29.853280] bdev_nvme.c:6725:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:24:14.958 [2024-04-24 21:29:29.853293] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:14.958 [2024-04-24 21:29:29.905195] bdev_nvme.c:1605:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x61400000a040 was disconnected and freed. delete nvme_qpair. 00:24:15.897 21:29:30 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:15.897 21:29:30 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:15.897 21:29:30 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:15.897 21:29:30 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:15.897 21:29:30 -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:15.897 21:29:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:15.897 21:29:30 -- common/autotest_common.sh@10 -- # set +x 00:24:15.897 21:29:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:15.897 21:29:30 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:24:15.897 21:29:30 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:24:15.897 21:29:30 -- host/discovery_remove_ifc.sh@90 -- # killprocess 1322529 00:24:15.897 21:29:30 -- common/autotest_common.sh@936 -- # '[' -z 1322529 ']' 00:24:15.897 21:29:30 -- common/autotest_common.sh@940 -- # kill -0 1322529 00:24:15.897 21:29:30 -- common/autotest_common.sh@941 -- # uname 00:24:15.897 21:29:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:15.897 21:29:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1322529 00:24:15.897 21:29:30 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:15.897 21:29:30 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:15.897 21:29:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1322529' 00:24:15.897 killing process with pid 1322529 00:24:15.897 21:29:30 -- common/autotest_common.sh@955 -- # kill 1322529 00:24:15.897 21:29:30 -- common/autotest_common.sh@960 -- # wait 1322529 00:24:16.463 21:29:31 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:24:16.464 21:29:31 -- nvmf/common.sh@477 -- # nvmfcleanup 00:24:16.464 21:29:31 -- nvmf/common.sh@117 -- # sync 00:24:16.464 21:29:31 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:16.464 21:29:31 -- nvmf/common.sh@120 -- # set +e 00:24:16.464 21:29:31 -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:16.464 21:29:31 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:16.464 rmmod nvme_tcp 00:24:16.464 rmmod nvme_fabrics 00:24:16.464 rmmod nvme_keyring 00:24:16.464 21:29:31 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:16.464 21:29:31 -- nvmf/common.sh@124 -- # set -e 00:24:16.464 
21:29:31 -- nvmf/common.sh@125 -- # return 0 00:24:16.464 21:29:31 -- nvmf/common.sh@478 -- # '[' -n 1322274 ']' 00:24:16.464 21:29:31 -- nvmf/common.sh@479 -- # killprocess 1322274 00:24:16.464 21:29:31 -- common/autotest_common.sh@936 -- # '[' -z 1322274 ']' 00:24:16.464 21:29:31 -- common/autotest_common.sh@940 -- # kill -0 1322274 00:24:16.464 21:29:31 -- common/autotest_common.sh@941 -- # uname 00:24:16.464 21:29:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:16.464 21:29:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1322274 00:24:16.464 21:29:31 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:24:16.464 21:29:31 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:24:16.464 21:29:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1322274' 00:24:16.464 killing process with pid 1322274 00:24:16.464 21:29:31 -- common/autotest_common.sh@955 -- # kill 1322274 00:24:16.464 21:29:31 -- common/autotest_common.sh@960 -- # wait 1322274 00:24:17.034 21:29:31 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:24:17.034 21:29:31 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:24:17.034 21:29:31 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:24:17.034 21:29:31 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:17.034 21:29:31 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:17.034 21:29:31 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:17.034 21:29:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:17.034 21:29:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:18.941 21:29:33 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:18.941 00:24:18.941 real 0m22.230s 00:24:18.941 user 0m27.676s 00:24:18.941 sys 0m5.222s 00:24:18.941 21:29:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:24:18.941 21:29:33 -- common/autotest_common.sh@10 -- # set +x 00:24:18.941 ************************************ 00:24:18.941 END TEST nvmf_discovery_remove_ifc 00:24:18.941 ************************************ 00:24:18.941 21:29:33 -- nvmf/nvmf.sh@101 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:24:18.941 21:29:33 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:18.941 21:29:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:18.941 21:29:33 -- common/autotest_common.sh@10 -- # set +x 00:24:19.202 ************************************ 00:24:19.202 START TEST nvmf_identify_kernel_target 00:24:19.202 ************************************ 00:24:19.202 21:29:33 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:24:19.203 * Looking for test storage... 
00:24:19.203 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host 00:24:19.203 21:29:33 -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:24:19.203 21:29:33 -- nvmf/common.sh@7 -- # uname -s 00:24:19.203 21:29:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:19.203 21:29:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:19.203 21:29:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:19.203 21:29:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:19.203 21:29:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:19.203 21:29:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:19.203 21:29:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:19.203 21:29:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:19.203 21:29:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:19.203 21:29:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:19.203 21:29:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b7babf-2e5c-ee11-906e-a4bf01970bf2 00:24:19.203 21:29:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=80b7babf-2e5c-ee11-906e-a4bf01970bf2 00:24:19.203 21:29:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:19.203 21:29:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:19.203 21:29:33 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:24:19.203 21:29:34 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:19.203 21:29:34 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:24:19.203 21:29:34 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:19.203 21:29:34 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:19.203 21:29:34 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:19.203 21:29:34 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.203 21:29:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.203 21:29:34 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.203 21:29:34 -- paths/export.sh@5 -- # export PATH 00:24:19.203 21:29:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.203 21:29:34 -- nvmf/common.sh@47 -- # : 0 00:24:19.203 21:29:34 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:19.203 21:29:34 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:19.203 21:29:34 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:19.203 21:29:34 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:19.203 21:29:34 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:19.203 21:29:34 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:19.203 21:29:34 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:19.203 21:29:34 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:19.203 21:29:34 -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:24:19.203 21:29:34 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:24:19.203 21:29:34 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:19.203 21:29:34 -- nvmf/common.sh@437 -- # prepare_net_devs 00:24:19.203 21:29:34 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:24:19.203 21:29:34 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:24:19.203 21:29:34 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:19.203 21:29:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:19.203 21:29:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:19.203 21:29:34 -- nvmf/common.sh@403 -- # [[ phy-fallback != virt ]] 00:24:19.203 21:29:34 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:24:19.203 21:29:34 -- nvmf/common.sh@285 -- # xtrace_disable 00:24:19.203 21:29:34 -- common/autotest_common.sh@10 -- # set +x 00:24:24.477 21:29:38 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:24.477 21:29:38 -- nvmf/common.sh@291 -- # pci_devs=() 00:24:24.477 21:29:38 -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:24.477 21:29:38 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:24.477 21:29:38 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:24.477 21:29:38 -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:24.477 21:29:38 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:24.477 21:29:38 -- nvmf/common.sh@295 -- # net_devs=() 00:24:24.477 21:29:38 -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:24.477 21:29:38 -- nvmf/common.sh@296 -- # e810=() 00:24:24.477 21:29:38 -- nvmf/common.sh@296 -- # local -ga e810 00:24:24.477 21:29:38 -- nvmf/common.sh@297 
-- # x722=() 00:24:24.477 21:29:38 -- nvmf/common.sh@297 -- # local -ga x722 00:24:24.477 21:29:38 -- nvmf/common.sh@298 -- # mlx=() 00:24:24.477 21:29:38 -- nvmf/common.sh@298 -- # local -ga mlx 00:24:24.478 21:29:38 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:24.478 21:29:38 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:24.478 21:29:38 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:24.478 21:29:38 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:24.478 21:29:38 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:24.478 21:29:38 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:24.478 21:29:38 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:24.478 21:29:38 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:24.478 21:29:38 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:24.478 21:29:38 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:24.478 21:29:38 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:24.478 21:29:38 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:24.478 21:29:38 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:24.478 21:29:38 -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:24:24.478 21:29:38 -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:24:24.478 21:29:38 -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:24:24.478 21:29:38 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:24.478 21:29:38 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:24.478 21:29:38 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:24:24.478 Found 0000:27:00.0 (0x8086 - 0x159b) 00:24:24.478 21:29:38 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:24.478 21:29:38 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:24.478 21:29:38 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:24.478 21:29:38 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:24.478 21:29:38 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:24.478 21:29:38 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:24.478 21:29:38 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:24:24.478 Found 0000:27:00.1 (0x8086 - 0x159b) 00:24:24.478 21:29:38 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:24.478 21:29:38 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:24.478 21:29:38 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:24.478 21:29:38 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:24.478 21:29:38 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:24.478 21:29:38 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:24.478 21:29:38 -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:24:24.478 21:29:38 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:24.478 21:29:38 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:24.478 21:29:38 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:24:24.478 21:29:38 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:24.478 21:29:38 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:24:24.478 Found net devices under 0000:27:00.0: cvl_0_0 00:24:24.478 21:29:38 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:24:24.478 21:29:38 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
00:24:24.478 21:29:38 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:24.478 21:29:38 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:24:24.478 21:29:38 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:24.478 21:29:38 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:24:24.478 Found net devices under 0000:27:00.1: cvl_0_1 00:24:24.478 21:29:38 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:24:24.478 21:29:38 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:24:24.478 21:29:38 -- nvmf/common.sh@403 -- # is_hw=yes 00:24:24.478 21:29:38 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:24:24.478 21:29:38 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:24:24.478 21:29:38 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:24:24.478 21:29:38 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:24.478 21:29:38 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:24.478 21:29:38 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:24.478 21:29:38 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:24.478 21:29:38 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:24.478 21:29:38 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:24.478 21:29:38 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:24.478 21:29:38 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:24.478 21:29:38 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:24.478 21:29:38 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:24.478 21:29:38 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:24.478 21:29:38 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:24.478 21:29:38 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:24.478 21:29:38 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:24.478 21:29:38 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:24.478 21:29:38 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:24.478 21:29:38 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:24.478 21:29:38 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:24.478 21:29:39 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:24.478 21:29:39 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:24.478 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:24.478 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.319 ms 00:24:24.478 00:24:24.478 --- 10.0.0.2 ping statistics --- 00:24:24.478 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:24.478 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:24:24.478 21:29:39 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:24.478 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:24.478 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:24:24.478 00:24:24.478 --- 10.0.0.1 ping statistics --- 00:24:24.478 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:24.478 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:24:24.478 21:29:39 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:24.478 21:29:39 -- nvmf/common.sh@411 -- # return 0 00:24:24.478 21:29:39 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:24:24.478 21:29:39 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:24.478 21:29:39 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:24:24.478 21:29:39 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:24:24.478 21:29:39 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:24.478 21:29:39 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:24:24.478 21:29:39 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:24:24.478 21:29:39 -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:24:24.478 21:29:39 -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:24:24.478 21:29:39 -- nvmf/common.sh@717 -- # local ip 00:24:24.478 21:29:39 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:24.478 21:29:39 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:24.478 21:29:39 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:24.478 21:29:39 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:24.478 21:29:39 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:24.478 21:29:39 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:24.478 21:29:39 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:24.478 21:29:39 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:24.478 21:29:39 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:24.478 21:29:39 -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:24:24.478 21:29:39 -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:24:24.478 21:29:39 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:24:24.478 21:29:39 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:24:24.478 21:29:39 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:24.478 21:29:39 -- nvmf/common.sh@625 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:24.478 21:29:39 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:24:24.478 21:29:39 -- nvmf/common.sh@628 -- # local block nvme 00:24:24.478 21:29:39 -- nvmf/common.sh@630 -- # [[ ! 
-e /sys/module/nvmet ]] 00:24:24.478 21:29:39 -- nvmf/common.sh@631 -- # modprobe nvmet 00:24:24.478 21:29:39 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:24:24.478 21:29:39 -- nvmf/common.sh@636 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh reset 00:24:26.488 Waiting for block devices as requested 00:24:26.749 0000:c9:00.0 (8086 0a54): vfio-pci -> nvme 00:24:26.749 0000:74:02.0 (8086 0cfe): vfio-pci -> idxd 00:24:27.009 0000:f1:02.0 (8086 0cfe): vfio-pci -> idxd 00:24:27.009 0000:cb:00.0 (8086 0a54): vfio-pci -> nvme 00:24:27.270 0000:79:02.0 (8086 0cfe): vfio-pci -> idxd 00:24:27.270 0000:6f:01.0 (8086 0b25): vfio-pci -> idxd 00:24:27.529 0000:6f:02.0 (8086 0cfe): vfio-pci -> idxd 00:24:27.529 0000:f6:01.0 (8086 0b25): vfio-pci -> idxd 00:24:27.529 0000:f6:02.0 (8086 0cfe): vfio-pci -> idxd 00:24:27.787 0000:74:01.0 (8086 0b25): vfio-pci -> idxd 00:24:27.787 0000:6a:02.0 (8086 0cfe): vfio-pci -> idxd 00:24:28.045 0000:79:01.0 (8086 0b25): vfio-pci -> idxd 00:24:28.045 0000:ec:01.0 (8086 0b25): vfio-pci -> idxd 00:24:28.045 0000:6a:01.0 (8086 0b25): vfio-pci -> idxd 00:24:28.304 0000:ca:00.0 (8086 0a54): vfio-pci -> nvme 00:24:28.304 0000:ec:02.0 (8086 0cfe): vfio-pci -> idxd 00:24:28.564 0000:e7:01.0 (8086 0b25): vfio-pci -> idxd 00:24:28.564 0000:e7:02.0 (8086 0cfe): vfio-pci -> idxd 00:24:28.825 0000:f1:01.0 (8086 0b25): vfio-pci -> idxd 00:24:29.084 21:29:43 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:24:29.084 21:29:43 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:24:29.084 21:29:43 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:24:29.084 21:29:43 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:24:29.084 21:29:43 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:24:29.084 21:29:43 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:24:29.084 21:29:43 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:24:29.084 21:29:43 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:24:29.084 21:29:43 -- scripts/common.sh@387 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:24:29.084 No valid GPT data, bailing 00:24:29.084 21:29:44 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:24:29.084 21:29:44 -- scripts/common.sh@391 -- # pt= 00:24:29.084 21:29:44 -- scripts/common.sh@392 -- # return 1 00:24:29.084 21:29:44 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:24:29.084 21:29:44 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:24:29.084 21:29:44 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme1n1 ]] 00:24:29.084 21:29:44 -- nvmf/common.sh@641 -- # is_block_zoned nvme1n1 00:24:29.084 21:29:44 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:24:29.084 21:29:44 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:24:29.084 21:29:44 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:24:29.084 21:29:44 -- nvmf/common.sh@642 -- # block_in_use nvme1n1 00:24:29.084 21:29:44 -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:24:29.084 21:29:44 -- scripts/common.sh@387 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/spdk-gpt.py nvme1n1 00:24:29.342 No valid GPT data, bailing 00:24:29.343 21:29:44 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:24:29.343 21:29:44 -- scripts/common.sh@391 -- # pt= 00:24:29.343 21:29:44 -- scripts/common.sh@392 -- # return 1 00:24:29.343 21:29:44 -- nvmf/common.sh@642 -- # nvme=/dev/nvme1n1 
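The "No valid GPT data, bailing" probes here are the device-screening step: configure_kernel_target only claims a namespace that is not zoned and carries no partition-table signature, and (as the trace shows) the last passing device wins. A minimal sketch of the same check with stock tools, assuming the standard sysfs and blkid interfaces used above:

# Pick an NVMe namespace that is not zoned and has no partition table,
# mirroring the is_block_zoned/block_in_use checks in the trace.
for dev in /sys/block/nvme*n*; do
    name=${dev##*/}
    # ordinary namespaces report "none" in queue/zoned
    if [[ -e $dev/queue/zoned && $(<"$dev/queue/zoned") != none ]]; then
        continue
    fi
    # any partition-table signature means the device is considered in use
    if [[ -z $(blkid -s PTTYPE -o value "/dev/$name") ]]; then
        nvme=/dev/$name    # last free namespace wins, as above
    fi
done
echo "selected: ${nvme:-none}"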
00:24:29.343 21:29:44 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:24:29.343 21:29:44 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme2n1 ]] 00:24:29.343 21:29:44 -- nvmf/common.sh@641 -- # is_block_zoned nvme2n1 00:24:29.343 21:29:44 -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:24:29.343 21:29:44 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:24:29.343 21:29:44 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:24:29.343 21:29:44 -- nvmf/common.sh@642 -- # block_in_use nvme2n1 00:24:29.343 21:29:44 -- scripts/common.sh@378 -- # local block=nvme2n1 pt 00:24:29.343 21:29:44 -- scripts/common.sh@387 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/spdk-gpt.py nvme2n1 00:24:29.343 No valid GPT data, bailing 00:24:29.343 21:29:44 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:24:29.343 21:29:44 -- scripts/common.sh@391 -- # pt= 00:24:29.343 21:29:44 -- scripts/common.sh@392 -- # return 1 00:24:29.343 21:29:44 -- nvmf/common.sh@642 -- # nvme=/dev/nvme2n1 00:24:29.343 21:29:44 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme2n1 ]] 00:24:29.343 21:29:44 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:29.343 21:29:44 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:29.343 21:29:44 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:24:29.343 21:29:44 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:24:29.343 21:29:44 -- nvmf/common.sh@656 -- # echo 1 00:24:29.343 21:29:44 -- nvmf/common.sh@657 -- # echo /dev/nvme2n1 00:24:29.343 21:29:44 -- nvmf/common.sh@658 -- # echo 1 00:24:29.343 21:29:44 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:24:29.343 21:29:44 -- nvmf/common.sh@661 -- # echo tcp 00:24:29.343 21:29:44 -- nvmf/common.sh@662 -- # echo 4420 00:24:29.343 21:29:44 -- nvmf/common.sh@663 -- # echo ipv4 00:24:29.343 21:29:44 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:24:29.343 21:29:44 -- nvmf/common.sh@669 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b7babf-2e5c-ee11-906e-a4bf01970bf2 --hostid=80b7babf-2e5c-ee11-906e-a4bf01970bf2 -a 10.0.0.1 -t tcp -s 4420 00:24:29.343 00:24:29.343 Discovery Log Number of Records 2, Generation counter 2 00:24:29.343 =====Discovery Log Entry 0====== 00:24:29.343 trtype: tcp 00:24:29.343 adrfam: ipv4 00:24:29.343 subtype: current discovery subsystem 00:24:29.343 treq: not specified, sq flow control disable supported 00:24:29.343 portid: 1 00:24:29.343 trsvcid: 4420 00:24:29.343 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:24:29.343 traddr: 10.0.0.1 00:24:29.343 eflags: none 00:24:29.343 sectype: none 00:24:29.343 =====Discovery Log Entry 1====== 00:24:29.343 trtype: tcp 00:24:29.343 adrfam: ipv4 00:24:29.343 subtype: nvme subsystem 00:24:29.343 treq: not specified, sq flow control disable supported 00:24:29.343 portid: 1 00:24:29.343 trsvcid: 4420 00:24:29.343 subnqn: nqn.2016-06.io.spdk:testnqn 00:24:29.343 traddr: 10.0.0.1 00:24:29.343 eflags: none 00:24:29.343 sectype: none 00:24:29.343 21:29:44 -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:24:29.343 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:24:29.343 EAL: No free 2048 kB hugepages reported on node 1 
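The mkdir/echo/ln -s run above is the entire kernel-target configuration: nvmet is driven purely through configfs, and the discovery log with two records confirms the port and subsystem came up. A condensed sketch of the same steps; the values are the ones echoed in this run, and the attribute names are the standard nvmet configfs ones (the trace shows only bare echoes, so which echo lands on which attribute is inferred):

# Export /dev/nvme2n1 as a kernel NVMe-oF TCP target, as traced above.
nqn=nqn.2016-06.io.spdk:testnqn
sub=/sys/kernel/config/nvmet/subsystems/$nqn
port=/sys/kernel/config/nvmet/ports/1

modprobe -a nvmet nvmet-tcp        # the run loads nvmet and lets configfs pull in the transport
mkdir -p "$sub/namespaces/1" "$port"
echo 1            > "$sub/attr_allow_any_host"
echo /dev/nvme2n1 > "$sub/namespaces/1/device_path"
echo 1            > "$sub/namespaces/1/enable"
echo 10.0.0.1     > "$port/addr_traddr"
echo tcp          > "$port/addr_trtype"
echo 4420         > "$port/addr_trsvcid"
echo ipv4         > "$port/addr_adrfam"
ln -s "$sub" "$port/subsystems/"   # expose the subsystem on the port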
00:24:29.602 ===================================================== 00:24:29.602 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:24:29.602 ===================================================== 00:24:29.602 Controller Capabilities/Features 00:24:29.602 ================================ 00:24:29.602 Vendor ID: 0000 00:24:29.602 Subsystem Vendor ID: 0000 00:24:29.602 Serial Number: c1cc3b2a535279686d56 00:24:29.602 Model Number: Linux 00:24:29.602 Firmware Version: 6.7.0-68 00:24:29.602 Recommended Arb Burst: 0 00:24:29.602 IEEE OUI Identifier: 00 00 00 00:24:29.602 Multi-path I/O 00:24:29.602 May have multiple subsystem ports: No 00:24:29.602 May have multiple controllers: No 00:24:29.602 Associated with SR-IOV VF: No 00:24:29.602 Max Data Transfer Size: Unlimited 00:24:29.602 Max Number of Namespaces: 0 00:24:29.602 Max Number of I/O Queues: 1024 00:24:29.602 NVMe Specification Version (VS): 1.3 00:24:29.602 NVMe Specification Version (Identify): 1.3 00:24:29.602 Maximum Queue Entries: 1024 00:24:29.602 Contiguous Queues Required: No 00:24:29.602 Arbitration Mechanisms Supported 00:24:29.602 Weighted Round Robin: Not Supported 00:24:29.602 Vendor Specific: Not Supported 00:24:29.602 Reset Timeout: 7500 ms 00:24:29.602 Doorbell Stride: 4 bytes 00:24:29.602 NVM Subsystem Reset: Not Supported 00:24:29.602 Command Sets Supported 00:24:29.602 NVM Command Set: Supported 00:24:29.602 Boot Partition: Not Supported 00:24:29.602 Memory Page Size Minimum: 4096 bytes 00:24:29.602 Memory Page Size Maximum: 4096 bytes 00:24:29.602 Persistent Memory Region: Not Supported 00:24:29.602 Optional Asynchronous Events Supported 00:24:29.602 Namespace Attribute Notices: Not Supported 00:24:29.602 Firmware Activation Notices: Not Supported 00:24:29.602 ANA Change Notices: Not Supported 00:24:29.602 PLE Aggregate Log Change Notices: Not Supported 00:24:29.602 LBA Status Info Alert Notices: Not Supported 00:24:29.602 EGE Aggregate Log Change Notices: Not Supported 00:24:29.602 Normal NVM Subsystem Shutdown event: Not Supported 00:24:29.602 Zone Descriptor Change Notices: Not Supported 00:24:29.602 Discovery Log Change Notices: Supported 00:24:29.602 Controller Attributes 00:24:29.602 128-bit Host Identifier: Not Supported 00:24:29.602 Non-Operational Permissive Mode: Not Supported 00:24:29.602 NVM Sets: Not Supported 00:24:29.602 Read Recovery Levels: Not Supported 00:24:29.602 Endurance Groups: Not Supported 00:24:29.602 Predictable Latency Mode: Not Supported 00:24:29.603 Traffic Based Keep ALive: Not Supported 00:24:29.603 Namespace Granularity: Not Supported 00:24:29.603 SQ Associations: Not Supported 00:24:29.603 UUID List: Not Supported 00:24:29.603 Multi-Domain Subsystem: Not Supported 00:24:29.603 Fixed Capacity Management: Not Supported 00:24:29.603 Variable Capacity Management: Not Supported 00:24:29.603 Delete Endurance Group: Not Supported 00:24:29.603 Delete NVM Set: Not Supported 00:24:29.603 Extended LBA Formats Supported: Not Supported 00:24:29.603 Flexible Data Placement Supported: Not Supported 00:24:29.603 00:24:29.603 Controller Memory Buffer Support 00:24:29.603 ================================ 00:24:29.603 Supported: No 00:24:29.603 00:24:29.603 Persistent Memory Region Support 00:24:29.603 ================================ 00:24:29.603 Supported: No 00:24:29.603 00:24:29.603 Admin Command Set Attributes 00:24:29.603 ============================ 00:24:29.603 Security Send/Receive: Not Supported 00:24:29.603 Format NVM: Not Supported 00:24:29.603 Firmware 
Activate/Download: Not Supported 00:24:29.603 Namespace Management: Not Supported 00:24:29.603 Device Self-Test: Not Supported 00:24:29.603 Directives: Not Supported 00:24:29.603 NVMe-MI: Not Supported 00:24:29.603 Virtualization Management: Not Supported 00:24:29.603 Doorbell Buffer Config: Not Supported 00:24:29.603 Get LBA Status Capability: Not Supported 00:24:29.603 Command & Feature Lockdown Capability: Not Supported 00:24:29.603 Abort Command Limit: 1 00:24:29.603 Async Event Request Limit: 1 00:24:29.603 Number of Firmware Slots: N/A 00:24:29.603 Firmware Slot 1 Read-Only: N/A 00:24:29.603 Firmware Activation Without Reset: N/A 00:24:29.603 Multiple Update Detection Support: N/A 00:24:29.603 Firmware Update Granularity: No Information Provided 00:24:29.603 Per-Namespace SMART Log: No 00:24:29.603 Asymmetric Namespace Access Log Page: Not Supported 00:24:29.603 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:24:29.603 Command Effects Log Page: Not Supported 00:24:29.603 Get Log Page Extended Data: Supported 00:24:29.603 Telemetry Log Pages: Not Supported 00:24:29.603 Persistent Event Log Pages: Not Supported 00:24:29.603 Supported Log Pages Log Page: May Support 00:24:29.603 Commands Supported & Effects Log Page: Not Supported 00:24:29.603 Feature Identifiers & Effects Log Page:May Support 00:24:29.603 NVMe-MI Commands & Effects Log Page: May Support 00:24:29.603 Data Area 4 for Telemetry Log: Not Supported 00:24:29.603 Error Log Page Entries Supported: 1 00:24:29.603 Keep Alive: Not Supported 00:24:29.603 00:24:29.603 NVM Command Set Attributes 00:24:29.603 ========================== 00:24:29.603 Submission Queue Entry Size 00:24:29.603 Max: 1 00:24:29.603 Min: 1 00:24:29.603 Completion Queue Entry Size 00:24:29.603 Max: 1 00:24:29.603 Min: 1 00:24:29.603 Number of Namespaces: 0 00:24:29.603 Compare Command: Not Supported 00:24:29.603 Write Uncorrectable Command: Not Supported 00:24:29.603 Dataset Management Command: Not Supported 00:24:29.603 Write Zeroes Command: Not Supported 00:24:29.603 Set Features Save Field: Not Supported 00:24:29.603 Reservations: Not Supported 00:24:29.603 Timestamp: Not Supported 00:24:29.603 Copy: Not Supported 00:24:29.603 Volatile Write Cache: Not Present 00:24:29.603 Atomic Write Unit (Normal): 1 00:24:29.603 Atomic Write Unit (PFail): 1 00:24:29.603 Atomic Compare & Write Unit: 1 00:24:29.603 Fused Compare & Write: Not Supported 00:24:29.603 Scatter-Gather List 00:24:29.603 SGL Command Set: Supported 00:24:29.603 SGL Keyed: Not Supported 00:24:29.603 SGL Bit Bucket Descriptor: Not Supported 00:24:29.603 SGL Metadata Pointer: Not Supported 00:24:29.603 Oversized SGL: Not Supported 00:24:29.603 SGL Metadata Address: Not Supported 00:24:29.603 SGL Offset: Supported 00:24:29.603 Transport SGL Data Block: Not Supported 00:24:29.603 Replay Protected Memory Block: Not Supported 00:24:29.603 00:24:29.603 Firmware Slot Information 00:24:29.603 ========================= 00:24:29.603 Active slot: 0 00:24:29.603 00:24:29.603 00:24:29.603 Error Log 00:24:29.603 ========= 00:24:29.603 00:24:29.603 Active Namespaces 00:24:29.603 ================= 00:24:29.603 Discovery Log Page 00:24:29.603 ================== 00:24:29.603 Generation Counter: 2 00:24:29.603 Number of Records: 2 00:24:29.603 Record Format: 0 00:24:29.603 00:24:29.603 Discovery Log Entry 0 00:24:29.603 ---------------------- 00:24:29.603 Transport Type: 3 (TCP) 00:24:29.603 Address Family: 1 (IPv4) 00:24:29.603 Subsystem Type: 3 (Current Discovery Subsystem) 00:24:29.603 Entry Flags: 
00:24:29.603 Duplicate Returned Information: 0 00:24:29.603 Explicit Persistent Connection Support for Discovery: 0 00:24:29.603 Transport Requirements: 00:24:29.603 Secure Channel: Not Specified 00:24:29.603 Port ID: 1 (0x0001) 00:24:29.603 Controller ID: 65535 (0xffff) 00:24:29.603 Admin Max SQ Size: 32 00:24:29.603 Transport Service Identifier: 4420 00:24:29.603 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:24:29.603 Transport Address: 10.0.0.1 00:24:29.603 Discovery Log Entry 1 00:24:29.603 ---------------------- 00:24:29.603 Transport Type: 3 (TCP) 00:24:29.603 Address Family: 1 (IPv4) 00:24:29.603 Subsystem Type: 2 (NVM Subsystem) 00:24:29.603 Entry Flags: 00:24:29.603 Duplicate Returned Information: 0 00:24:29.603 Explicit Persistent Connection Support for Discovery: 0 00:24:29.603 Transport Requirements: 00:24:29.603 Secure Channel: Not Specified 00:24:29.603 Port ID: 1 (0x0001) 00:24:29.603 Controller ID: 65535 (0xffff) 00:24:29.603 Admin Max SQ Size: 32 00:24:29.603 Transport Service Identifier: 4420 00:24:29.603 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:24:29.603 Transport Address: 10.0.0.1 00:24:29.603 21:29:44 -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:29.603 EAL: No free 2048 kB hugepages reported on node 1 00:24:29.603 get_feature(0x01) failed 00:24:29.603 get_feature(0x02) failed 00:24:29.603 get_feature(0x04) failed 00:24:29.603 ===================================================== 00:24:29.603 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:24:29.603 ===================================================== 00:24:29.603 Controller Capabilities/Features 00:24:29.603 ================================ 00:24:29.603 Vendor ID: 0000 00:24:29.603 Subsystem Vendor ID: 0000 00:24:29.603 Serial Number: 19e56a1ae04335973e20 00:24:29.603 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:24:29.603 Firmware Version: 6.7.0-68 00:24:29.603 Recommended Arb Burst: 6 00:24:29.603 IEEE OUI Identifier: 00 00 00 00:24:29.603 Multi-path I/O 00:24:29.603 May have multiple subsystem ports: Yes 00:24:29.603 May have multiple controllers: Yes 00:24:29.603 Associated with SR-IOV VF: No 00:24:29.603 Max Data Transfer Size: Unlimited 00:24:29.603 Max Number of Namespaces: 1024 00:24:29.603 Max Number of I/O Queues: 128 00:24:29.603 NVMe Specification Version (VS): 1.3 00:24:29.603 NVMe Specification Version (Identify): 1.3 00:24:29.603 Maximum Queue Entries: 1024 00:24:29.603 Contiguous Queues Required: No 00:24:29.603 Arbitration Mechanisms Supported 00:24:29.603 Weighted Round Robin: Not Supported 00:24:29.603 Vendor Specific: Not Supported 00:24:29.603 Reset Timeout: 7500 ms 00:24:29.603 Doorbell Stride: 4 bytes 00:24:29.603 NVM Subsystem Reset: Not Supported 00:24:29.603 Command Sets Supported 00:24:29.603 NVM Command Set: Supported 00:24:29.603 Boot Partition: Not Supported 00:24:29.603 Memory Page Size Minimum: 4096 bytes 00:24:29.603 Memory Page Size Maximum: 4096 bytes 00:24:29.604 Persistent Memory Region: Not Supported 00:24:29.604 Optional Asynchronous Events Supported 00:24:29.604 Namespace Attribute Notices: Supported 00:24:29.604 Firmware Activation Notices: Not Supported 00:24:29.604 ANA Change Notices: Supported 00:24:29.604 PLE Aggregate Log Change Notices: Not Supported 00:24:29.604 LBA Status Info Alert Notices: Not Supported 00:24:29.604 EGE 
Aggregate Log Change Notices: Not Supported 00:24:29.604 Normal NVM Subsystem Shutdown event: Not Supported 00:24:29.604 Zone Descriptor Change Notices: Not Supported 00:24:29.604 Discovery Log Change Notices: Not Supported 00:24:29.604 Controller Attributes 00:24:29.604 128-bit Host Identifier: Supported 00:24:29.604 Non-Operational Permissive Mode: Not Supported 00:24:29.604 NVM Sets: Not Supported 00:24:29.604 Read Recovery Levels: Not Supported 00:24:29.604 Endurance Groups: Not Supported 00:24:29.604 Predictable Latency Mode: Not Supported 00:24:29.604 Traffic Based Keep ALive: Supported 00:24:29.604 Namespace Granularity: Not Supported 00:24:29.604 SQ Associations: Not Supported 00:24:29.604 UUID List: Not Supported 00:24:29.604 Multi-Domain Subsystem: Not Supported 00:24:29.604 Fixed Capacity Management: Not Supported 00:24:29.604 Variable Capacity Management: Not Supported 00:24:29.604 Delete Endurance Group: Not Supported 00:24:29.604 Delete NVM Set: Not Supported 00:24:29.604 Extended LBA Formats Supported: Not Supported 00:24:29.604 Flexible Data Placement Supported: Not Supported 00:24:29.604 00:24:29.604 Controller Memory Buffer Support 00:24:29.604 ================================ 00:24:29.604 Supported: No 00:24:29.604 00:24:29.604 Persistent Memory Region Support 00:24:29.604 ================================ 00:24:29.604 Supported: No 00:24:29.604 00:24:29.604 Admin Command Set Attributes 00:24:29.604 ============================ 00:24:29.604 Security Send/Receive: Not Supported 00:24:29.604 Format NVM: Not Supported 00:24:29.604 Firmware Activate/Download: Not Supported 00:24:29.604 Namespace Management: Not Supported 00:24:29.604 Device Self-Test: Not Supported 00:24:29.604 Directives: Not Supported 00:24:29.604 NVMe-MI: Not Supported 00:24:29.604 Virtualization Management: Not Supported 00:24:29.604 Doorbell Buffer Config: Not Supported 00:24:29.604 Get LBA Status Capability: Not Supported 00:24:29.604 Command & Feature Lockdown Capability: Not Supported 00:24:29.604 Abort Command Limit: 4 00:24:29.604 Async Event Request Limit: 4 00:24:29.604 Number of Firmware Slots: N/A 00:24:29.604 Firmware Slot 1 Read-Only: N/A 00:24:29.604 Firmware Activation Without Reset: N/A 00:24:29.604 Multiple Update Detection Support: N/A 00:24:29.604 Firmware Update Granularity: No Information Provided 00:24:29.604 Per-Namespace SMART Log: Yes 00:24:29.604 Asymmetric Namespace Access Log Page: Supported 00:24:29.604 ANA Transition Time : 10 sec 00:24:29.604 00:24:29.604 Asymmetric Namespace Access Capabilities 00:24:29.604 ANA Optimized State : Supported 00:24:29.604 ANA Non-Optimized State : Supported 00:24:29.604 ANA Inaccessible State : Supported 00:24:29.604 ANA Persistent Loss State : Supported 00:24:29.604 ANA Change State : Supported 00:24:29.604 ANAGRPID is not changed : No 00:24:29.604 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:24:29.604 00:24:29.604 ANA Group Identifier Maximum : 128 00:24:29.604 Number of ANA Group Identifiers : 128 00:24:29.604 Max Number of Allowed Namespaces : 1024 00:24:29.604 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:24:29.604 Command Effects Log Page: Supported 00:24:29.604 Get Log Page Extended Data: Supported 00:24:29.604 Telemetry Log Pages: Not Supported 00:24:29.604 Persistent Event Log Pages: Not Supported 00:24:29.604 Supported Log Pages Log Page: May Support 00:24:29.604 Commands Supported & Effects Log Page: Not Supported 00:24:29.604 Feature Identifiers & Effects Log Page:May Support 00:24:29.604 NVMe-MI Commands & Effects Log Page: 
May Support 00:24:29.604 Data Area 4 for Telemetry Log: Not Supported 00:24:29.604 Error Log Page Entries Supported: 128 00:24:29.604 Keep Alive: Supported 00:24:29.604 Keep Alive Granularity: 1000 ms 00:24:29.604 00:24:29.604 NVM Command Set Attributes 00:24:29.604 ========================== 00:24:29.604 Submission Queue Entry Size 00:24:29.604 Max: 64 00:24:29.604 Min: 64 00:24:29.604 Completion Queue Entry Size 00:24:29.604 Max: 16 00:24:29.604 Min: 16 00:24:29.604 Number of Namespaces: 1024 00:24:29.604 Compare Command: Not Supported 00:24:29.604 Write Uncorrectable Command: Not Supported 00:24:29.604 Dataset Management Command: Supported 00:24:29.604 Write Zeroes Command: Supported 00:24:29.604 Set Features Save Field: Not Supported 00:24:29.604 Reservations: Not Supported 00:24:29.604 Timestamp: Not Supported 00:24:29.604 Copy: Not Supported 00:24:29.604 Volatile Write Cache: Present 00:24:29.604 Atomic Write Unit (Normal): 1 00:24:29.604 Atomic Write Unit (PFail): 1 00:24:29.604 Atomic Compare & Write Unit: 1 00:24:29.604 Fused Compare & Write: Not Supported 00:24:29.604 Scatter-Gather List 00:24:29.604 SGL Command Set: Supported 00:24:29.604 SGL Keyed: Not Supported 00:24:29.604 SGL Bit Bucket Descriptor: Not Supported 00:24:29.604 SGL Metadata Pointer: Not Supported 00:24:29.604 Oversized SGL: Not Supported 00:24:29.604 SGL Metadata Address: Not Supported 00:24:29.604 SGL Offset: Supported 00:24:29.604 Transport SGL Data Block: Not Supported 00:24:29.604 Replay Protected Memory Block: Not Supported 00:24:29.604 00:24:29.604 Firmware Slot Information 00:24:29.604 ========================= 00:24:29.604 Active slot: 0 00:24:29.604 00:24:29.604 Asymmetric Namespace Access 00:24:29.604 =========================== 00:24:29.604 Change Count : 0 00:24:29.604 Number of ANA Group Descriptors : 1 00:24:29.604 ANA Group Descriptor : 0 00:24:29.604 ANA Group ID : 1 00:24:29.604 Number of NSID Values : 1 00:24:29.604 Change Count : 0 00:24:29.604 ANA State : 1 00:24:29.604 Namespace Identifier : 1 00:24:29.604 00:24:29.604 Commands Supported and Effects 00:24:29.604 ============================== 00:24:29.604 Admin Commands 00:24:29.604 -------------- 00:24:29.604 Get Log Page (02h): Supported 00:24:29.604 Identify (06h): Supported 00:24:29.604 Abort (08h): Supported 00:24:29.604 Set Features (09h): Supported 00:24:29.604 Get Features (0Ah): Supported 00:24:29.604 Asynchronous Event Request (0Ch): Supported 00:24:29.604 Keep Alive (18h): Supported 00:24:29.604 I/O Commands 00:24:29.604 ------------ 00:24:29.604 Flush (00h): Supported 00:24:29.604 Write (01h): Supported LBA-Change 00:24:29.604 Read (02h): Supported 00:24:29.604 Write Zeroes (08h): Supported LBA-Change 00:24:29.604 Dataset Management (09h): Supported 00:24:29.604 00:24:29.604 Error Log 00:24:29.604 ========= 00:24:29.604 Entry: 0 00:24:29.604 Error Count: 0x3 00:24:29.604 Submission Queue Id: 0x0 00:24:29.604 Command Id: 0x5 00:24:29.604 Phase Bit: 0 00:24:29.604 Status Code: 0x2 00:24:29.604 Status Code Type: 0x0 00:24:29.604 Do Not Retry: 1 00:24:29.604 Error Location: 0x28 00:24:29.604 LBA: 0x0 00:24:29.604 Namespace: 0x0 00:24:29.604 Vendor Log Page: 0x0 00:24:29.604 ----------- 00:24:29.604 Entry: 1 00:24:29.604 Error Count: 0x2 00:24:29.604 Submission Queue Id: 0x0 00:24:29.604 Command Id: 0x5 00:24:29.604 Phase Bit: 0 00:24:29.604 Status Code: 0x2 00:24:29.604 Status Code Type: 0x0 00:24:29.604 Do Not Retry: 1 00:24:29.604 Error Location: 0x28 00:24:29.604 LBA: 0x0 00:24:29.604 Namespace: 0x0 00:24:29.604 Vendor Log Page: 
0x0 00:24:29.604 ----------- 00:24:29.604 Entry: 2 00:24:29.604 Error Count: 0x1 00:24:29.604 Submission Queue Id: 0x0 00:24:29.604 Command Id: 0x4 00:24:29.604 Phase Bit: 0 00:24:29.604 Status Code: 0x2 00:24:29.604 Status Code Type: 0x0 00:24:29.604 Do Not Retry: 1 00:24:29.604 Error Location: 0x28 00:24:29.604 LBA: 0x0 00:24:29.604 Namespace: 0x0 00:24:29.604 Vendor Log Page: 0x0 00:24:29.604 00:24:29.604 Number of Queues 00:24:29.604 ================ 00:24:29.604 Number of I/O Submission Queues: 128 00:24:29.604 Number of I/O Completion Queues: 128 00:24:29.604 00:24:29.604 ZNS Specific Controller Data 00:24:29.604 ============================ 00:24:29.604 Zone Append Size Limit: 0 00:24:29.604 00:24:29.604 00:24:29.604 Active Namespaces 00:24:29.605 ================= 00:24:29.605 get_feature(0x05) failed 00:24:29.605 Namespace ID:1 00:24:29.605 Command Set Identifier: NVM (00h) 00:24:29.605 Deallocate: Supported 00:24:29.605 Deallocated/Unwritten Error: Not Supported 00:24:29.605 Deallocated Read Value: Unknown 00:24:29.605 Deallocate in Write Zeroes: Not Supported 00:24:29.605 Deallocated Guard Field: 0xFFFF 00:24:29.605 Flush: Supported 00:24:29.605 Reservation: Not Supported 00:24:29.605 Namespace Sharing Capabilities: Multiple Controllers 00:24:29.605 Size (in LBAs): 3907029168 (1863GiB) 00:24:29.605 Capacity (in LBAs): 3907029168 (1863GiB) 00:24:29.605 Utilization (in LBAs): 3907029168 (1863GiB) 00:24:29.605 UUID: 56dff30e-b6e1-4947-b649-42fe13b4adc5 00:24:29.605 Thin Provisioning: Not Supported 00:24:29.605 Per-NS Atomic Units: Yes 00:24:29.605 Atomic Boundary Size (Normal): 0 00:24:29.605 Atomic Boundary Size (PFail): 0 00:24:29.605 Atomic Boundary Offset: 0 00:24:29.605 NGUID/EUI64 Never Reused: No 00:24:29.605 ANA group ID: 1 00:24:29.605 Namespace Write Protected: No 00:24:29.605 Number of LBA Formats: 1 00:24:29.605 Current LBA Format: LBA Format #00 00:24:29.605 LBA Format #00: Data Size: 512 Metadata Size: 0 00:24:29.605 00:24:29.605 21:29:44 -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:24:29.605 21:29:44 -- nvmf/common.sh@477 -- # nvmfcleanup 00:24:29.605 21:29:44 -- nvmf/common.sh@117 -- # sync 00:24:29.605 21:29:44 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:29.605 21:29:44 -- nvmf/common.sh@120 -- # set +e 00:24:29.605 21:29:44 -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:29.605 21:29:44 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:29.605 rmmod nvme_tcp 00:24:29.605 rmmod nvme_fabrics 00:24:29.605 21:29:44 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:29.605 21:29:44 -- nvmf/common.sh@124 -- # set -e 00:24:29.605 21:29:44 -- nvmf/common.sh@125 -- # return 0 00:24:29.605 21:29:44 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:24:29.605 21:29:44 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:24:29.605 21:29:44 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:24:29.605 21:29:44 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:24:29.605 21:29:44 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:29.605 21:29:44 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:29.605 21:29:44 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:29.605 21:29:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:29.605 21:29:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:32.203 21:29:46 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:32.203 21:29:46 -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:24:32.203 21:29:46 -- 
nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:24:32.203 21:29:46 -- nvmf/common.sh@675 -- # echo 0 00:24:32.203 21:29:46 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:32.203 21:29:46 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:32.203 21:29:46 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:24:32.203 21:29:46 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:32.203 21:29:46 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:24:32.203 21:29:46 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:24:32.203 21:29:46 -- nvmf/common.sh@687 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh 00:24:34.740 0000:74:02.0 (8086 0cfe): idxd -> vfio-pci 00:24:34.740 0000:f1:02.0 (8086 0cfe): idxd -> vfio-pci 00:24:34.740 0000:79:02.0 (8086 0cfe): idxd -> vfio-pci 00:24:34.740 0000:6f:01.0 (8086 0b25): idxd -> vfio-pci 00:24:34.740 0000:6f:02.0 (8086 0cfe): idxd -> vfio-pci 00:24:34.740 0000:f6:01.0 (8086 0b25): idxd -> vfio-pci 00:24:34.740 0000:f6:02.0 (8086 0cfe): idxd -> vfio-pci 00:24:34.740 0000:74:01.0 (8086 0b25): idxd -> vfio-pci 00:24:34.740 0000:6a:02.0 (8086 0cfe): idxd -> vfio-pci 00:24:34.740 0000:79:01.0 (8086 0b25): idxd -> vfio-pci 00:24:34.740 0000:ec:01.0 (8086 0b25): idxd -> vfio-pci 00:24:34.740 0000:6a:01.0 (8086 0b25): idxd -> vfio-pci 00:24:34.740 0000:ec:02.0 (8086 0cfe): idxd -> vfio-pci 00:24:34.740 0000:e7:01.0 (8086 0b25): idxd -> vfio-pci 00:24:34.999 0000:e7:02.0 (8086 0cfe): idxd -> vfio-pci 00:24:34.999 0000:f1:01.0 (8086 0b25): idxd -> vfio-pci 00:24:36.376 0000:c9:00.0 (8086 0a54): nvme -> vfio-pci 00:24:36.376 0000:cb:00.0 (8086 0a54): nvme -> vfio-pci 00:24:36.944 0000:ca:00.0 (8086 0a54): nvme -> vfio-pci 00:24:37.202 00:24:37.202 real 0m18.145s 00:24:37.202 user 0m3.645s 00:24:37.202 sys 0m8.070s 00:24:37.202 21:29:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:24:37.202 21:29:52 -- common/autotest_common.sh@10 -- # set +x 00:24:37.202 ************************************ 00:24:37.202 END TEST nvmf_identify_kernel_target 00:24:37.202 ************************************ 00:24:37.202 21:29:52 -- nvmf/nvmf.sh@102 -- # run_test nvmf_auth /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:24:37.202 21:29:52 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:37.202 21:29:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:37.202 21:29:52 -- common/autotest_common.sh@10 -- # set +x 00:24:37.464 ************************************ 00:24:37.464 START TEST nvmf_auth 00:24:37.464 ************************************ 00:24:37.464 21:29:52 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:24:37.464 * Looking for test storage... 
00:24:37.464 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host 00:24:37.464 21:29:52 -- host/auth.sh@10 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:24:37.464 21:29:52 -- nvmf/common.sh@7 -- # uname -s 00:24:37.464 21:29:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:37.464 21:29:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:37.464 21:29:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:37.464 21:29:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:37.464 21:29:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:37.464 21:29:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:37.464 21:29:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:37.464 21:29:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:37.464 21:29:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:37.464 21:29:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:37.464 21:29:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b7babf-2e5c-ee11-906e-a4bf01970bf2 00:24:37.464 21:29:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=80b7babf-2e5c-ee11-906e-a4bf01970bf2 00:24:37.464 21:29:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:37.464 21:29:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:37.464 21:29:52 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:24:37.464 21:29:52 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:37.464 21:29:52 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:24:37.464 21:29:52 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:37.464 21:29:52 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:37.464 21:29:52 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:37.464 21:29:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:37.464 21:29:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:37.464 21:29:52 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:37.464 21:29:52 -- paths/export.sh@5 -- # export PATH 00:24:37.464 21:29:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:37.464 21:29:52 -- nvmf/common.sh@47 -- # : 0 00:24:37.464 21:29:52 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:37.464 21:29:52 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:37.464 21:29:52 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:37.464 21:29:52 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:37.464 21:29:52 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:37.464 21:29:52 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:37.464 21:29:52 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:37.464 21:29:52 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:37.464 21:29:52 -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:24:37.464 21:29:52 -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:24:37.464 21:29:52 -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:24:37.464 21:29:52 -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:24:37.464 21:29:52 -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:37.464 21:29:52 -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:37.464 21:29:52 -- host/auth.sh@21 -- # keys=() 00:24:37.464 21:29:52 -- host/auth.sh@77 -- # nvmftestinit 00:24:37.464 21:29:52 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:24:37.464 21:29:52 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:37.464 21:29:52 -- nvmf/common.sh@437 -- # prepare_net_devs 00:24:37.464 21:29:52 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:24:37.464 21:29:52 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:24:37.464 21:29:52 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:37.464 21:29:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:37.464 21:29:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:37.464 21:29:52 -- nvmf/common.sh@403 -- # [[ phy-fallback != virt ]] 00:24:37.464 21:29:52 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:24:37.464 21:29:52 -- nvmf/common.sh@285 -- # xtrace_disable 00:24:37.464 21:29:52 -- common/autotest_common.sh@10 -- # set +x 00:24:42.746 21:29:57 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:42.746 21:29:57 -- nvmf/common.sh@291 -- # 
pci_devs=() 00:24:42.746 21:29:57 -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:42.746 21:29:57 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:42.746 21:29:57 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:42.746 21:29:57 -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:42.746 21:29:57 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:42.746 21:29:57 -- nvmf/common.sh@295 -- # net_devs=() 00:24:42.746 21:29:57 -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:42.746 21:29:57 -- nvmf/common.sh@296 -- # e810=() 00:24:42.746 21:29:57 -- nvmf/common.sh@296 -- # local -ga e810 00:24:42.746 21:29:57 -- nvmf/common.sh@297 -- # x722=() 00:24:42.746 21:29:57 -- nvmf/common.sh@297 -- # local -ga x722 00:24:42.746 21:29:57 -- nvmf/common.sh@298 -- # mlx=() 00:24:42.746 21:29:57 -- nvmf/common.sh@298 -- # local -ga mlx 00:24:42.746 21:29:57 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:42.746 21:29:57 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:42.746 21:29:57 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:42.746 21:29:57 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:42.746 21:29:57 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:42.746 21:29:57 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:42.746 21:29:57 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:42.746 21:29:57 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:42.746 21:29:57 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:42.746 21:29:57 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:42.746 21:29:57 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:42.746 21:29:57 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:42.746 21:29:57 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:42.746 21:29:57 -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:24:42.746 21:29:57 -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:24:42.746 21:29:57 -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:24:42.746 21:29:57 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:42.746 21:29:57 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:42.746 21:29:57 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:24:42.746 Found 0000:27:00.0 (0x8086 - 0x159b) 00:24:42.746 21:29:57 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:42.746 21:29:57 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:42.746 21:29:57 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:42.746 21:29:57 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:42.746 21:29:57 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:42.746 21:29:57 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:42.746 21:29:57 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:24:42.746 Found 0000:27:00.1 (0x8086 - 0x159b) 00:24:42.746 21:29:57 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:42.746 21:29:57 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:42.746 21:29:57 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:42.746 21:29:57 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:42.746 21:29:57 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:42.746 21:29:57 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:42.746 21:29:57 -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 
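gather_supported_nvmf_pci_devs keys everything off PCI vendor:device IDs; 0x8086:0x159b is the Intel E810 "ice" part found twice on this box. A rough lspci equivalent of that enumeration, with the ID taken from the arrays in the trace (extend the same way for the mellanox IDs listed above):

# List candidate NVMe-oF test NICs by PCI vendor:device ID and map each
# PCI function to its kernel net device, as the trace does via sysfs.
for bdf in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
    for net in /sys/bus/pci/devices/$bdf/net/*; do
        [[ -e $net ]] && echo "$bdf -> ${net##*/}"
    done
done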
00:24:42.746 21:29:57 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:42.746 21:29:57 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:42.746 21:29:57 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:24:42.746 21:29:57 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:42.746 21:29:57 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:24:42.746 Found net devices under 0000:27:00.0: cvl_0_0 00:24:42.746 21:29:57 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:24:42.746 21:29:57 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:42.746 21:29:57 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:42.746 21:29:57 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:24:42.746 21:29:57 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:42.746 21:29:57 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:24:42.746 Found net devices under 0000:27:00.1: cvl_0_1 00:24:42.746 21:29:57 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:24:42.746 21:29:57 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:24:42.746 21:29:57 -- nvmf/common.sh@403 -- # is_hw=yes 00:24:42.746 21:29:57 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:24:42.746 21:29:57 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:24:42.746 21:29:57 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:24:42.746 21:29:57 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:42.746 21:29:57 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:42.746 21:29:57 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:42.746 21:29:57 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:42.746 21:29:57 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:42.746 21:29:57 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:42.746 21:29:57 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:42.746 21:29:57 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:42.746 21:29:57 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:42.746 21:29:57 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:42.746 21:29:57 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:42.746 21:29:57 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:42.746 21:29:57 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:42.746 21:29:57 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:42.746 21:29:57 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:42.746 21:29:57 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:42.746 21:29:57 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:42.746 21:29:57 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:42.746 21:29:57 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:42.746 21:29:57 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:42.746 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:42.746 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.317 ms 00:24:42.746 00:24:42.746 --- 10.0.0.2 ping statistics --- 00:24:42.746 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:42.746 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:24:42.746 21:29:57 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:42.746 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:42.746 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.077 ms 00:24:42.746 00:24:42.746 --- 10.0.0.1 ping statistics --- 00:24:42.746 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:42.746 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:24:42.746 21:29:57 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:42.746 21:29:57 -- nvmf/common.sh@411 -- # return 0 00:24:42.746 21:29:57 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:24:42.746 21:29:57 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:42.746 21:29:57 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:24:42.746 21:29:57 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:24:42.746 21:29:57 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:42.746 21:29:57 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:24:42.746 21:29:57 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:24:42.746 21:29:57 -- host/auth.sh@78 -- # nvmfappstart -L nvme_auth 00:24:42.746 21:29:57 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:24:42.746 21:29:57 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:42.746 21:29:57 -- common/autotest_common.sh@10 -- # set +x 00:24:42.746 21:29:57 -- nvmf/common.sh@470 -- # nvmfpid=1336641 00:24:42.746 21:29:57 -- nvmf/common.sh@471 -- # waitforlisten 1336641 00:24:42.746 21:29:57 -- common/autotest_common.sh@817 -- # '[' -z 1336641 ']' 00:24:42.746 21:29:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:42.746 21:29:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:42.746 21:29:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
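nvmf_tcp_init above wires the two E810 ports back-to-back through a network namespace so target and initiator traffic really crosses the NICs: cvl_0_0 moves into cvl_0_0_ns_spdk as the 10.0.0.2 target side, cvl_0_1 stays in the root namespace as the 10.0.0.1 initiator side, port 4420 is opened, and nvmf_tgt is then launched inside the namespace. The same topology by hand, using the commands from the trace:

ns=cvl_0_0_ns_spdk
ip netns add $ns
ip link set cvl_0_0 netns $ns                     # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator side, root namespace
ip netns exec $ns ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec $ns ip link set cvl_0_0 up
ip netns exec $ns ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ip netns exec $ns ping -c 1 10.0.0.1              # namespace -> root sanity check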
00:24:42.746 21:29:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:42.746 21:29:57 -- common/autotest_common.sh@10 -- # set +x 00:24:42.746 21:29:57 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:24:43.683 21:29:58 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:43.683 21:29:58 -- common/autotest_common.sh@850 -- # return 0 00:24:43.683 21:29:58 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:24:43.683 21:29:58 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:43.683 21:29:58 -- common/autotest_common.sh@10 -- # set +x 00:24:43.683 21:29:58 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:43.683 21:29:58 -- host/auth.sh@79 -- # trap 'cat /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:24:43.683 21:29:58 -- host/auth.sh@81 -- # gen_key null 32 00:24:43.683 21:29:58 -- host/auth.sh@53 -- # local digest len file key 00:24:43.683 21:29:58 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:43.683 21:29:58 -- host/auth.sh@54 -- # local -A digests 00:24:43.683 21:29:58 -- host/auth.sh@56 -- # digest=null 00:24:43.683 21:29:58 -- host/auth.sh@56 -- # len=32 00:24:43.683 21:29:58 -- host/auth.sh@57 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:43.683 21:29:58 -- host/auth.sh@57 -- # key=83ac93fcda7985afc37da17e62a14130 00:24:43.683 21:29:58 -- host/auth.sh@58 -- # mktemp -t spdk.key-null.XXX 00:24:43.683 21:29:58 -- host/auth.sh@58 -- # file=/tmp/spdk.key-null.fWH 00:24:43.683 21:29:58 -- host/auth.sh@59 -- # format_dhchap_key 83ac93fcda7985afc37da17e62a14130 0 00:24:43.683 21:29:58 -- nvmf/common.sh@708 -- # format_key DHHC-1 83ac93fcda7985afc37da17e62a14130 0 00:24:43.683 21:29:58 -- nvmf/common.sh@691 -- # local prefix key digest 00:24:43.683 21:29:58 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:24:43.683 21:29:58 -- nvmf/common.sh@693 -- # key=83ac93fcda7985afc37da17e62a14130 00:24:43.683 21:29:58 -- nvmf/common.sh@693 -- # digest=0 00:24:43.683 21:29:58 -- nvmf/common.sh@694 -- # python - 00:24:43.683 21:29:58 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-null.fWH 00:24:43.683 21:29:58 -- host/auth.sh@62 -- # echo /tmp/spdk.key-null.fWH 00:24:43.683 21:29:58 -- host/auth.sh@81 -- # keys[0]=/tmp/spdk.key-null.fWH 00:24:43.683 21:29:58 -- host/auth.sh@82 -- # gen_key null 48 00:24:43.683 21:29:58 -- host/auth.sh@53 -- # local digest len file key 00:24:43.683 21:29:58 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:43.683 21:29:58 -- host/auth.sh@54 -- # local -A digests 00:24:43.683 21:29:58 -- host/auth.sh@56 -- # digest=null 00:24:43.683 21:29:58 -- host/auth.sh@56 -- # len=48 00:24:43.683 21:29:58 -- host/auth.sh@57 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:43.683 21:29:58 -- host/auth.sh@57 -- # key=8849827bba08ffa7ab4e377d4560cca70aba77a7c4e38805 00:24:43.683 21:29:58 -- host/auth.sh@58 -- # mktemp -t spdk.key-null.XXX 00:24:43.683 21:29:58 -- host/auth.sh@58 -- # file=/tmp/spdk.key-null.YDo 00:24:43.683 21:29:58 -- host/auth.sh@59 -- # format_dhchap_key 8849827bba08ffa7ab4e377d4560cca70aba77a7c4e38805 0 00:24:43.683 21:29:58 -- nvmf/common.sh@708 -- # format_key DHHC-1 8849827bba08ffa7ab4e377d4560cca70aba77a7c4e38805 0 00:24:43.683 21:29:58 -- nvmf/common.sh@691 -- # local prefix key digest 00:24:43.683 21:29:58 -- nvmf/common.sh@693 -- # 
prefix=DHHC-1 00:24:43.683 21:29:58 -- nvmf/common.sh@693 -- # key=8849827bba08ffa7ab4e377d4560cca70aba77a7c4e38805 00:24:43.683 21:29:58 -- nvmf/common.sh@693 -- # digest=0 00:24:43.683 21:29:58 -- nvmf/common.sh@694 -- # python - 00:24:43.683 21:29:58 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-null.YDo 00:24:43.683 21:29:58 -- host/auth.sh@62 -- # echo /tmp/spdk.key-null.YDo 00:24:43.684 21:29:58 -- host/auth.sh@82 -- # keys[1]=/tmp/spdk.key-null.YDo 00:24:43.684 21:29:58 -- host/auth.sh@83 -- # gen_key sha256 32 00:24:43.684 21:29:58 -- host/auth.sh@53 -- # local digest len file key 00:24:43.684 21:29:58 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:43.684 21:29:58 -- host/auth.sh@54 -- # local -A digests 00:24:43.684 21:29:58 -- host/auth.sh@56 -- # digest=sha256 00:24:43.684 21:29:58 -- host/auth.sh@56 -- # len=32 00:24:43.684 21:29:58 -- host/auth.sh@57 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:43.684 21:29:58 -- host/auth.sh@57 -- # key=5e33f64484791f005bf705eb6d627f76 00:24:43.684 21:29:58 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha256.XXX 00:24:43.684 21:29:58 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha256.Lfk 00:24:43.684 21:29:58 -- host/auth.sh@59 -- # format_dhchap_key 5e33f64484791f005bf705eb6d627f76 1 00:24:43.684 21:29:58 -- nvmf/common.sh@708 -- # format_key DHHC-1 5e33f64484791f005bf705eb6d627f76 1 00:24:43.684 21:29:58 -- nvmf/common.sh@691 -- # local prefix key digest 00:24:43.684 21:29:58 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:24:43.684 21:29:58 -- nvmf/common.sh@693 -- # key=5e33f64484791f005bf705eb6d627f76 00:24:43.684 21:29:58 -- nvmf/common.sh@693 -- # digest=1 00:24:43.684 21:29:58 -- nvmf/common.sh@694 -- # python - 00:24:43.684 21:29:58 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha256.Lfk 00:24:43.684 21:29:58 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha256.Lfk 00:24:43.684 21:29:58 -- host/auth.sh@83 -- # keys[2]=/tmp/spdk.key-sha256.Lfk 00:24:43.684 21:29:58 -- host/auth.sh@84 -- # gen_key sha384 48 00:24:43.684 21:29:58 -- host/auth.sh@53 -- # local digest len file key 00:24:43.684 21:29:58 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:43.684 21:29:58 -- host/auth.sh@54 -- # local -A digests 00:24:43.684 21:29:58 -- host/auth.sh@56 -- # digest=sha384 00:24:43.684 21:29:58 -- host/auth.sh@56 -- # len=48 00:24:43.684 21:29:58 -- host/auth.sh@57 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:43.684 21:29:58 -- host/auth.sh@57 -- # key=11cbf308458634a36480e715afd6e3ebf322eed6140935d3 00:24:43.684 21:29:58 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha384.XXX 00:24:43.684 21:29:58 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha384.L5D 00:24:43.684 21:29:58 -- host/auth.sh@59 -- # format_dhchap_key 11cbf308458634a36480e715afd6e3ebf322eed6140935d3 2 00:24:43.684 21:29:58 -- nvmf/common.sh@708 -- # format_key DHHC-1 11cbf308458634a36480e715afd6e3ebf322eed6140935d3 2 00:24:43.684 21:29:58 -- nvmf/common.sh@691 -- # local prefix key digest 00:24:43.684 21:29:58 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:24:43.684 21:29:58 -- nvmf/common.sh@693 -- # key=11cbf308458634a36480e715afd6e3ebf322eed6140935d3 00:24:43.684 21:29:58 -- nvmf/common.sh@693 -- # digest=2 00:24:43.684 21:29:58 -- nvmf/common.sh@694 -- # python - 00:24:43.684 21:29:58 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha384.L5D 00:24:43.684 21:29:58 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha384.L5D 00:24:43.684 21:29:58 -- host/auth.sh@84 -- # keys[3]=/tmp/spdk.key-sha384.L5D 
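Each gen_key call draws random bytes with xxd from /dev/urandom and wraps them via an inline python helper (format_dhchap_key -> format_key) whose body the trace does not show. A sketch of what that wrapping plausibly does, based on the NVMe DH-HMAC-CHAP secret representation DHHC-1:<dd>:<base64(secret || crc32)>:; treat the exact encoding, and the use of the hex string verbatim as the secret, as assumptions:

# Inferred equivalent of "format_key DHHC-1 <hex> <digest>" as traced above.
key=$(xxd -p -c0 -l 16 /dev/urandom)   # 32 hex chars, as in "gen_key null 32"
python3 - "$key" 0 <<'EOF'
import base64, sys, zlib
secret, digest = sys.argv[1].encode(), int(sys.argv[2])  # digest 0 = null
crc = zlib.crc32(secret).to_bytes(4, "little")
print("DHHC-1:%02x:%s:" % (digest, base64.b64encode(secret + crc).decode()))
EOF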
00:24:43.684 21:29:58 -- host/auth.sh@85 -- # gen_key sha512 64
00:24:43.684 21:29:58 -- host/auth.sh@53 -- # local digest len file key
00:24:43.684 21:29:58 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:24:43.684 21:29:58 -- host/auth.sh@54 -- # local -A digests
00:24:43.684 21:29:58 -- host/auth.sh@56 -- # digest=sha512
00:24:43.684 21:29:58 -- host/auth.sh@56 -- # len=64
00:24:43.684 21:29:58 -- host/auth.sh@57 -- # xxd -p -c0 -l 32 /dev/urandom
00:24:43.684 21:29:58 -- host/auth.sh@57 -- # key=39293236510b9ecb24007760e36a9d52f3c6d244c0f793d8da5f791d9ed0b044
00:24:43.684 21:29:58 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha512.XXX
00:24:43.684 21:29:58 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha512.wZT
00:24:43.684 21:29:58 -- host/auth.sh@59 -- # format_dhchap_key 39293236510b9ecb24007760e36a9d52f3c6d244c0f793d8da5f791d9ed0b044 3
00:24:43.684 21:29:58 -- nvmf/common.sh@708 -- # format_key DHHC-1 39293236510b9ecb24007760e36a9d52f3c6d244c0f793d8da5f791d9ed0b044 3
00:24:43.684 21:29:58 -- nvmf/common.sh@691 -- # local prefix key digest
00:24:43.684 21:29:58 -- nvmf/common.sh@693 -- # prefix=DHHC-1
00:24:43.684 21:29:58 -- nvmf/common.sh@693 -- # key=39293236510b9ecb24007760e36a9d52f3c6d244c0f793d8da5f791d9ed0b044
00:24:43.684 21:29:58 -- nvmf/common.sh@693 -- # digest=3
00:24:43.684 21:29:58 -- nvmf/common.sh@694 -- # python -
00:24:43.684 21:29:58 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha512.wZT
00:24:43.684 21:29:58 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha512.wZT
00:24:43.684 21:29:58 -- host/auth.sh@85 -- # keys[4]=/tmp/spdk.key-sha512.wZT
00:24:43.684 21:29:58 -- host/auth.sh@87 -- # waitforlisten 1336641
00:24:43.684 21:29:58 -- common/autotest_common.sh@817 -- # '[' -z 1336641 ']'
00:24:43.684 21:29:58 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:24:43.684 21:29:58 -- common/autotest_common.sh@822 -- # local max_retries=100
00:24:43.684 21:29:58 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
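Collapsed, the five gen_key/format_dhchap_key passes above amount to a dozen lines of shell. A minimal standalone sketch follows; the function name is ours, and the 4-byte trailer is an assumption (CRC-32 of the secret, little-endian, as in the NVMe-oF DH-HMAC-CHAP secret representation), since the log never expands the python one-liner:

# Sketch of the key generator seen in the trace. gen_dhchap_key null 32
# draws 16 random bytes, keeps them as a 32-char hex string, and wraps
# that string in the DHHC-1:<hash id>:<base64 payload>: envelope.
gen_dhchap_key() {
    local digest=$1 len=$2 key file
    local -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # len hex characters of entropy
    file=$(mktemp -t "spdk.key-$digest.XXX")
    python3 -c '
import base64, sys, zlib
s = sys.argv[1].encode()
crc = zlib.crc32(s).to_bytes(4, "little")  # assumed trailer: CRC-32 LE of the secret
print("DHHC-1:%02d:%s:" % (int(sys.argv[2]), base64.b64encode(s + crc).decode()))
' "$key" "${digests[$digest]}" > "$file"
    chmod 0600 "$file"
    echo "$file"
}

Called as gen_dhchap_key sha384 48, this would produce a file equivalent to the keys[3] entry above.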
00:24:43.684 21:29:58 -- common/autotest_common.sh@826 -- # xtrace_disable
00:24:43.684 21:29:58 -- common/autotest_common.sh@10 -- # set +x
00:24:43.944 21:29:58 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:24:43.944 21:29:58 -- common/autotest_common.sh@850 -- # return 0
00:24:43.944 21:29:58 -- host/auth.sh@88 -- # for i in "${!keys[@]}"
00:24:43.944 21:29:58 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.fWH
00:24:43.944 21:29:58 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:43.944 21:29:58 -- common/autotest_common.sh@10 -- # set +x
00:24:43.944 21:29:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:43.944 21:29:58 -- host/auth.sh@88 -- # for i in "${!keys[@]}"
00:24:43.944 21:29:58 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.YDo
00:24:43.944 21:29:58 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:43.944 21:29:58 -- common/autotest_common.sh@10 -- # set +x
00:24:43.944 21:29:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:43.944 21:29:58 -- host/auth.sh@88 -- # for i in "${!keys[@]}"
00:24:43.944 21:29:58 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.Lfk
00:24:43.944 21:29:58 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:43.944 21:29:58 -- common/autotest_common.sh@10 -- # set +x
00:24:43.944 21:29:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:43.944 21:29:58 -- host/auth.sh@88 -- # for i in "${!keys[@]}"
00:24:43.944 21:29:58 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.L5D
00:24:43.944 21:29:58 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:43.944 21:29:58 -- common/autotest_common.sh@10 -- # set +x
00:24:43.944 21:29:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:43.944 21:29:58 -- host/auth.sh@88 -- # for i in "${!keys[@]}"
00:24:43.944 21:29:58 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.wZT
00:24:43.944 21:29:58 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:43.944 21:29:58 -- common/autotest_common.sh@10 -- # set +x
00:24:43.944 21:29:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:43.944 21:29:58 -- host/auth.sh@92 -- # nvmet_auth_init
00:24:43.944 21:29:58 -- host/auth.sh@35 -- # get_main_ns_ip
00:24:43.944 21:29:58 -- nvmf/common.sh@717 -- # local ip
00:24:43.944 21:29:58 -- nvmf/common.sh@718 -- # ip_candidates=()
00:24:43.944 21:29:58 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:24:43.944 21:29:58 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:43.944 21:29:58 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:43.944 21:29:58 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:24:43.944 21:29:58 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:43.944 21:29:58 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:24:43.944 21:29:58 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:24:43.944 21:29:58 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:24:43.944 21:29:58 -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1
00:24:43.944 21:29:58 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1
00:24:43.944 21:29:58 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet
00:24:43.944 21:29:58 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
00:24:43.944 21:29:58 -- nvmf/common.sh@625 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
00:24:43.944 21:29:58 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1
00:24:43.944 21:29:58 -- nvmf/common.sh@628 -- # local block nvme
00:24:43.944 21:29:58 -- nvmf/common.sh@630 -- # [[ ! -e /sys/module/nvmet ]]
00:24:43.944 21:29:58 -- nvmf/common.sh@631 -- # modprobe nvmet
00:24:43.944 21:29:58 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]]
00:24:43.944 21:29:58 -- nvmf/common.sh@636 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh reset
00:24:46.484 Waiting for block devices as requested
00:24:46.484 0000:c9:00.0 (8086 0a54): vfio-pci -> nvme
00:24:46.484 0000:74:02.0 (8086 0cfe): vfio-pci -> idxd
00:24:46.745 0000:f1:02.0 (8086 0cfe): vfio-pci -> idxd
00:24:46.745 0000:cb:00.0 (8086 0a54): vfio-pci -> nvme
00:24:47.006 0000:79:02.0 (8086 0cfe): vfio-pci -> idxd
00:24:47.006 0000:6f:01.0 (8086 0b25): vfio-pci -> idxd
00:24:47.267 0000:6f:02.0 (8086 0cfe): vfio-pci -> idxd
00:24:47.267 0000:f6:01.0 (8086 0b25): vfio-pci -> idxd
00:24:47.528 0000:f6:02.0 (8086 0cfe): vfio-pci -> idxd
00:24:47.528 0000:74:01.0 (8086 0b25): vfio-pci -> idxd
00:24:47.789 0000:6a:02.0 (8086 0cfe): vfio-pci -> idxd
00:24:47.789 0000:79:01.0 (8086 0b25): vfio-pci -> idxd
00:24:47.789 0000:ec:01.0 (8086 0b25): vfio-pci -> idxd
00:24:48.049 0000:6a:01.0 (8086 0b25): vfio-pci -> idxd
00:24:48.049 0000:ca:00.0 (8086 0a54): vfio-pci -> nvme
00:24:48.310 0000:ec:02.0 (8086 0cfe): vfio-pci -> idxd
00:24:48.310 0000:e7:01.0 (8086 0b25): vfio-pci -> idxd
00:24:48.571 0000:e7:02.0 (8086 0cfe): vfio-pci -> idxd
00:24:48.571 0000:f1:01.0 (8086 0b25): vfio-pci -> idxd
00:24:49.952 21:30:04 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme*
00:24:49.952 21:30:04 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]]
00:24:49.952 21:30:04 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1
00:24:49.952 21:30:04 -- common/autotest_common.sh@1648 -- # local device=nvme0n1
00:24:49.952 21:30:04 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:24:49.952 21:30:04 -- common/autotest_common.sh@1651 -- # [[ none != none ]]
00:24:49.952 21:30:04 -- nvmf/common.sh@642 -- # block_in_use nvme0n1
00:24:49.952 21:30:04 -- scripts/common.sh@378 -- # local block=nvme0n1 pt
00:24:49.952 21:30:04 -- scripts/common.sh@387 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1
00:24:49.952 No valid GPT data, bailing
00:24:49.952 21:30:04 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:24:49.952 21:30:04 -- scripts/common.sh@391 -- # pt=
00:24:49.952 21:30:04 -- scripts/common.sh@392 -- # return 1
00:24:49.952 21:30:04 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1
00:24:49.952 21:30:04 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme*
00:24:49.952 21:30:04 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme1n1 ]]
00:24:49.952 21:30:04 -- nvmf/common.sh@641 -- # is_block_zoned nvme1n1
00:24:49.952 21:30:04 -- common/autotest_common.sh@1648 -- # local device=nvme1n1
00:24:49.952 21:30:04 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]]
00:24:49.952 21:30:04 -- common/autotest_common.sh@1651 -- # [[ none != none ]]
00:24:49.952 21:30:04 -- nvmf/common.sh@642 -- # block_in_use nvme1n1
00:24:49.952 21:30:04 -- scripts/common.sh@378 -- # local block=nvme1n1 pt
00:24:49.952 21:30:04 -- scripts/common.sh@387 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/spdk-gpt.py nvme1n1
00:24:49.952 No valid GPT data, bailing
00:24:49.952 21:30:04 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1
00:24:49.952 21:30:04 -- scripts/common.sh@391 -- # pt=
00:24:49.952 21:30:04 -- scripts/common.sh@392 -- # return 1
00:24:49.952 21:30:04 -- nvmf/common.sh@642 -- # nvme=/dev/nvme1n1
00:24:49.952 21:30:04 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme*
00:24:49.952 21:30:04 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme2n1 ]]
00:24:49.952 21:30:04 -- nvmf/common.sh@641 -- # is_block_zoned nvme2n1
00:24:49.952 21:30:04 -- common/autotest_common.sh@1648 -- # local device=nvme2n1
00:24:49.952 21:30:04 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]]
00:24:49.952 21:30:04 -- common/autotest_common.sh@1651 -- # [[ none != none ]]
00:24:49.952 21:30:04 -- nvmf/common.sh@642 -- # block_in_use nvme2n1
00:24:49.952 21:30:04 -- scripts/common.sh@378 -- # local block=nvme2n1 pt
00:24:49.952 21:30:04 -- scripts/common.sh@387 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/spdk-gpt.py nvme2n1
00:24:49.952 No valid GPT data, bailing
00:24:49.952 21:30:04 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n1
00:24:49.952 21:30:04 -- scripts/common.sh@391 -- # pt=
00:24:49.952 21:30:04 -- scripts/common.sh@392 -- # return 1
00:24:49.952 21:30:04 -- nvmf/common.sh@642 -- # nvme=/dev/nvme2n1
00:24:49.952 21:30:04 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme2n1 ]]
00:24:49.952 21:30:04 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
00:24:49.952 21:30:04 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
00:24:49.952 21:30:04 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1
00:24:49.952 21:30:04 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0
00:24:49.952 21:30:04 -- nvmf/common.sh@656 -- # echo 1
00:24:49.952 21:30:04 -- nvmf/common.sh@657 -- # echo /dev/nvme2n1
00:24:49.952 21:30:04 -- nvmf/common.sh@658 -- # echo 1
00:24:49.952 21:30:04 -- nvmf/common.sh@660 -- # echo 10.0.0.1
00:24:49.952 21:30:04 -- nvmf/common.sh@661 -- # echo tcp
00:24:49.952 21:30:04 -- nvmf/common.sh@662 -- # echo 4420
00:24:49.952 21:30:04 -- nvmf/common.sh@663 -- # echo ipv4
00:24:49.952 21:30:04 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/
00:24:49.952 21:30:04 -- nvmf/common.sh@669 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b7babf-2e5c-ee11-906e-a4bf01970bf2 --hostid=80b7babf-2e5c-ee11-906e-a4bf01970bf2 -a 10.0.0.1 -t tcp -s 4420
00:24:49.952
00:24:49.952 Discovery Log Number of Records 2, Generation counter 2
00:24:49.952 =====Discovery Log Entry 0======
00:24:49.952 trtype:  tcp
00:24:49.952 adrfam:  ipv4
00:24:49.952 subtype: current discovery subsystem
00:24:49.952 treq:    not specified, sq flow control disable supported
00:24:49.952 portid:  1
00:24:49.952 trsvcid: 4420
00:24:49.952 subnqn:  nqn.2014-08.org.nvmexpress.discovery
00:24:49.952 traddr:  10.0.0.1
00:24:49.952 eflags:  none
00:24:49.952 sectype: none
00:24:49.952 =====Discovery Log Entry 1======
00:24:49.952 trtype:  tcp
00:24:49.952 adrfam:  ipv4
00:24:49.952 subtype: nvme subsystem
00:24:49.952 treq:    not specified, sq flow control disable supported
00:24:49.952 portid:  1
00:24:49.952 trsvcid: 4420
00:24:49.952 subnqn:  nqn.2024-02.io.spdk:cnode0
00:24:49.952 traddr:  10.0.0.1
00:24:49.952 eflags:  none
00:24:49.952 sectype: none
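The mkdir/echo/ln -s burst above is the entire kernel-target bring-up; the log records only the values written, so the configfs destinations in the sketch below are inferred from the standard kernel nvmet attribute names rather than shown in the trace:

# configure_kernel_target, condensed, with assumed destination attributes.
# Run as root with the nvmet and nvmet-tcp modules loaded.
nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
port=$nvmet/ports/1
mkdir "$subsys" "$subsys/namespaces/1" "$port"
echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$subsys/attr_model"   # assumed: attr_model
echo 1 > "$subsys/attr_allow_any_host"     # per-host DH-CHAP auth still applies
echo /dev/nvme2n1 > "$subsys/namespaces/1/device_path"
echo 1 > "$subsys/namespaces/1/enable"
echo 10.0.0.1 > "$port/addr_traddr"
echo tcp > "$port/addr_trtype"
echo 4420 > "$port/addr_trsvcid"
echo ipv4 > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"        # expose the subsystem on the port
# 'nvme discover -a 10.0.0.1 -t tcp -s 4420' should then return the two
# records shown above: the discovery subsystem plus nqn.2024-02.io.spdk:cnode0.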
00:24:49.952 21:30:04 -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:49.952 21:30:04 -- host/auth.sh@37 -- # echo 0 00:24:49.952 21:30:04 -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:24:49.952 21:30:04 -- host/auth.sh@95 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:49.952 21:30:04 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:49.952 21:30:04 -- host/auth.sh@44 -- # digest=sha256 00:24:49.952 21:30:04 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:49.952 21:30:04 -- host/auth.sh@44 -- # keyid=1 00:24:49.952 21:30:04 -- host/auth.sh@45 -- # key=DHHC-1:00:ODg0OTgyN2JiYTA4ZmZhN2FiNGUzNzdkNDU2MGNjYTcwYWJhNzdhN2M0ZTM4ODA1gO84og==: 00:24:49.952 21:30:04 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:49.952 21:30:04 -- host/auth.sh@48 -- # echo ffdhe2048 00:24:49.952 21:30:04 -- host/auth.sh@49 -- # echo DHHC-1:00:ODg0OTgyN2JiYTA4ZmZhN2FiNGUzNzdkNDU2MGNjYTcwYWJhNzdhN2M0ZTM4ODA1gO84og==: 00:24:49.952 21:30:04 -- host/auth.sh@100 -- # IFS=, 00:24:49.952 21:30:04 -- host/auth.sh@101 -- # printf %s sha256,sha384,sha512 00:24:49.952 21:30:04 -- host/auth.sh@100 -- # IFS=, 00:24:49.952 21:30:04 -- host/auth.sh@101 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:49.952 21:30:04 -- host/auth.sh@100 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:24:49.952 21:30:04 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:49.952 21:30:04 -- host/auth.sh@68 -- # digest=sha256,sha384,sha512 00:24:49.952 21:30:04 -- host/auth.sh@68 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:49.952 21:30:04 -- host/auth.sh@68 -- # keyid=1 00:24:49.952 21:30:04 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:49.952 21:30:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:49.952 21:30:04 -- common/autotest_common.sh@10 -- # set +x 00:24:49.952 21:30:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:49.952 21:30:04 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:49.952 21:30:04 -- nvmf/common.sh@717 -- # local ip 00:24:49.952 21:30:04 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:49.952 21:30:04 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:49.952 21:30:04 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:49.952 21:30:04 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:49.952 21:30:04 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:49.952 21:30:04 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:49.952 21:30:04 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:49.952 21:30:04 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:49.952 21:30:04 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:49.952 21:30:04 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:24:49.952 21:30:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:49.952 21:30:04 -- common/autotest_common.sh@10 -- # set +x 00:24:50.214 nvme0n1 00:24:50.214 21:30:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:50.214 21:30:04 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:50.214 
21:30:04 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:50.214 21:30:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:50.214 21:30:04 -- common/autotest_common.sh@10 -- # set +x 00:24:50.214 21:30:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:50.214 21:30:04 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:50.214 21:30:04 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:50.214 21:30:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:50.214 21:30:04 -- common/autotest_common.sh@10 -- # set +x 00:24:50.214 21:30:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:50.214 21:30:05 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:24:50.214 21:30:05 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:24:50.214 21:30:05 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:50.214 21:30:05 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:24:50.214 21:30:05 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:50.214 21:30:05 -- host/auth.sh@44 -- # digest=sha256 00:24:50.214 21:30:05 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:50.214 21:30:05 -- host/auth.sh@44 -- # keyid=0 00:24:50.214 21:30:05 -- host/auth.sh@45 -- # key=DHHC-1:00:ODNhYzkzZmNkYTc5ODVhZmMzN2RhMTdlNjJhMTQxMzARnuzC: 00:24:50.214 21:30:05 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:50.214 21:30:05 -- host/auth.sh@48 -- # echo ffdhe2048 00:24:50.214 21:30:05 -- host/auth.sh@49 -- # echo DHHC-1:00:ODNhYzkzZmNkYTc5ODVhZmMzN2RhMTdlNjJhMTQxMzARnuzC: 00:24:50.214 21:30:05 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 0 00:24:50.214 21:30:05 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:50.214 21:30:05 -- host/auth.sh@68 -- # digest=sha256 00:24:50.214 21:30:05 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:24:50.214 21:30:05 -- host/auth.sh@68 -- # keyid=0 00:24:50.214 21:30:05 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:50.214 21:30:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:50.214 21:30:05 -- common/autotest_common.sh@10 -- # set +x 00:24:50.214 21:30:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:50.214 21:30:05 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:50.214 21:30:05 -- nvmf/common.sh@717 -- # local ip 00:24:50.214 21:30:05 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:50.214 21:30:05 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:50.214 21:30:05 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:50.214 21:30:05 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:50.214 21:30:05 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:50.214 21:30:05 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:50.214 21:30:05 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:50.214 21:30:05 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:50.214 21:30:05 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:50.214 21:30:05 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:24:50.214 21:30:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:50.214 21:30:05 -- common/autotest_common.sh@10 -- # set +x 00:24:50.214 nvme0n1 00:24:50.214 21:30:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:50.214 21:30:05 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:50.214 21:30:05 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:24:50.214 21:30:05 -- common/autotest_common.sh@10 -- # set +x 00:24:50.214 21:30:05 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:50.474 21:30:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:50.474 21:30:05 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:50.474 21:30:05 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:50.474 21:30:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:50.474 21:30:05 -- common/autotest_common.sh@10 -- # set +x 00:24:50.474 21:30:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:50.474 21:30:05 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:50.474 21:30:05 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:50.474 21:30:05 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:50.474 21:30:05 -- host/auth.sh@44 -- # digest=sha256 00:24:50.474 21:30:05 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:50.474 21:30:05 -- host/auth.sh@44 -- # keyid=1 00:24:50.474 21:30:05 -- host/auth.sh@45 -- # key=DHHC-1:00:ODg0OTgyN2JiYTA4ZmZhN2FiNGUzNzdkNDU2MGNjYTcwYWJhNzdhN2M0ZTM4ODA1gO84og==: 00:24:50.474 21:30:05 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:50.474 21:30:05 -- host/auth.sh@48 -- # echo ffdhe2048 00:24:50.474 21:30:05 -- host/auth.sh@49 -- # echo DHHC-1:00:ODg0OTgyN2JiYTA4ZmZhN2FiNGUzNzdkNDU2MGNjYTcwYWJhNzdhN2M0ZTM4ODA1gO84og==: 00:24:50.474 21:30:05 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 1 00:24:50.474 21:30:05 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:50.474 21:30:05 -- host/auth.sh@68 -- # digest=sha256 00:24:50.474 21:30:05 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:24:50.474 21:30:05 -- host/auth.sh@68 -- # keyid=1 00:24:50.474 21:30:05 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:50.474 21:30:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:50.474 21:30:05 -- common/autotest_common.sh@10 -- # set +x 00:24:50.474 21:30:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:50.474 21:30:05 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:50.474 21:30:05 -- nvmf/common.sh@717 -- # local ip 00:24:50.474 21:30:05 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:50.474 21:30:05 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:50.474 21:30:05 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:50.474 21:30:05 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:50.474 21:30:05 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:50.474 21:30:05 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:50.474 21:30:05 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:50.474 21:30:05 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:50.474 21:30:05 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:50.474 21:30:05 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:24:50.474 21:30:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:50.474 21:30:05 -- common/autotest_common.sh@10 -- # set +x 00:24:50.474 nvme0n1 00:24:50.474 21:30:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:50.474 21:30:05 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:50.474 21:30:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:50.474 21:30:05 -- common/autotest_common.sh@10 -- # set +x 
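Stripped of the rpc_cmd and xtrace plumbing, the iteration in flight here (sha256 digest, ffdhe2048 group, key 1) is a handful of RPCs against the SPDK application's /var/tmp/spdk.sock. A sketch calling rpc.py directly, with paths, NQNs and flags taken from the log:

# One authenticated attach/verify/detach cycle, driven through rpc.py.
rpc=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py
# pin the host side to the digest/DH-group pair under test
$rpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
# connect to the kernel target; key1 was registered via keyring_file_add_key
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1
$rpc bdev_nvme_get_controllers | jq -r '.[].name'   # prints nvme0 on success
$rpc bdev_nvme_detach_controller nvme0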
00:24:50.474 21:30:05 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:50.474 21:30:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:50.474 21:30:05 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:50.474 21:30:05 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:50.474 21:30:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:50.474 21:30:05 -- common/autotest_common.sh@10 -- # set +x 00:24:50.735 21:30:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:50.735 21:30:05 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:50.735 21:30:05 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:24:50.735 21:30:05 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:50.735 21:30:05 -- host/auth.sh@44 -- # digest=sha256 00:24:50.735 21:30:05 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:50.735 21:30:05 -- host/auth.sh@44 -- # keyid=2 00:24:50.735 21:30:05 -- host/auth.sh@45 -- # key=DHHC-1:01:NWUzM2Y2NDQ4NDc5MWYwMDViZjcwNWViNmQ2MjdmNzbCQTLC: 00:24:50.735 21:30:05 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:50.735 21:30:05 -- host/auth.sh@48 -- # echo ffdhe2048 00:24:50.735 21:30:05 -- host/auth.sh@49 -- # echo DHHC-1:01:NWUzM2Y2NDQ4NDc5MWYwMDViZjcwNWViNmQ2MjdmNzbCQTLC: 00:24:50.735 21:30:05 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 2 00:24:50.735 21:30:05 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:50.735 21:30:05 -- host/auth.sh@68 -- # digest=sha256 00:24:50.735 21:30:05 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:24:50.735 21:30:05 -- host/auth.sh@68 -- # keyid=2 00:24:50.735 21:30:05 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:50.735 21:30:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:50.735 21:30:05 -- common/autotest_common.sh@10 -- # set +x 00:24:50.735 21:30:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:50.735 21:30:05 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:50.735 21:30:05 -- nvmf/common.sh@717 -- # local ip 00:24:50.735 21:30:05 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:50.735 21:30:05 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:50.735 21:30:05 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:50.735 21:30:05 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:50.735 21:30:05 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:50.735 21:30:05 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:50.735 21:30:05 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:50.735 21:30:05 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:50.735 21:30:05 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:50.735 21:30:05 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:50.735 21:30:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:50.735 21:30:05 -- common/autotest_common.sh@10 -- # set +x 00:24:50.735 nvme0n1 00:24:50.735 21:30:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:50.735 21:30:05 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:50.735 21:30:05 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:50.735 21:30:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:50.735 21:30:05 -- common/autotest_common.sh@10 -- # set +x 00:24:50.735 21:30:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:50.735 21:30:05 -- 
host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:50.735 21:30:05 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:50.735 21:30:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:50.735 21:30:05 -- common/autotest_common.sh@10 -- # set +x 00:24:50.735 21:30:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:50.735 21:30:05 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:50.735 21:30:05 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:24:50.735 21:30:05 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:50.735 21:30:05 -- host/auth.sh@44 -- # digest=sha256 00:24:50.735 21:30:05 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:50.735 21:30:05 -- host/auth.sh@44 -- # keyid=3 00:24:50.735 21:30:05 -- host/auth.sh@45 -- # key=DHHC-1:02:MTFjYmYzMDg0NTg2MzRhMzY0ODBlNzE1YWZkNmUzZWJmMzIyZWVkNjE0MDkzNWQz3FOfOQ==: 00:24:50.736 21:30:05 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:50.736 21:30:05 -- host/auth.sh@48 -- # echo ffdhe2048 00:24:50.736 21:30:05 -- host/auth.sh@49 -- # echo DHHC-1:02:MTFjYmYzMDg0NTg2MzRhMzY0ODBlNzE1YWZkNmUzZWJmMzIyZWVkNjE0MDkzNWQz3FOfOQ==: 00:24:50.736 21:30:05 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 3 00:24:50.736 21:30:05 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:50.736 21:30:05 -- host/auth.sh@68 -- # digest=sha256 00:24:50.736 21:30:05 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:24:50.736 21:30:05 -- host/auth.sh@68 -- # keyid=3 00:24:50.736 21:30:05 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:50.736 21:30:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:50.736 21:30:05 -- common/autotest_common.sh@10 -- # set +x 00:24:50.736 21:30:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:50.736 21:30:05 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:50.736 21:30:05 -- nvmf/common.sh@717 -- # local ip 00:24:50.736 21:30:05 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:50.736 21:30:05 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:50.736 21:30:05 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:50.736 21:30:05 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:50.736 21:30:05 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:50.736 21:30:05 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:50.736 21:30:05 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:50.736 21:30:05 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:50.736 21:30:05 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:50.736 21:30:05 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:24:50.736 21:30:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:50.736 21:30:05 -- common/autotest_common.sh@10 -- # set +x 00:24:50.997 nvme0n1 00:24:50.997 21:30:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:50.997 21:30:05 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:50.997 21:30:05 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:50.997 21:30:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:50.997 21:30:05 -- common/autotest_common.sh@10 -- # set +x 00:24:50.997 21:30:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:50.997 21:30:05 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:50.997 21:30:05 -- host/auth.sh@74 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:24:50.997 21:30:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:50.997 21:30:05 -- common/autotest_common.sh@10 -- # set +x 00:24:50.997 21:30:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:50.997 21:30:05 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:50.997 21:30:05 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:24:50.997 21:30:05 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:50.997 21:30:05 -- host/auth.sh@44 -- # digest=sha256 00:24:50.997 21:30:05 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:50.997 21:30:05 -- host/auth.sh@44 -- # keyid=4 00:24:50.997 21:30:05 -- host/auth.sh@45 -- # key=DHHC-1:03:MzkyOTMyMzY1MTBiOWVjYjI0MDA3NzYwZTM2YTlkNTJmM2M2ZDI0NGMwZjc5M2Q4ZGE1Zjc5MWQ5ZWQwYjA0NAuu6f8=: 00:24:50.997 21:30:05 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:50.997 21:30:05 -- host/auth.sh@48 -- # echo ffdhe2048 00:24:50.997 21:30:05 -- host/auth.sh@49 -- # echo DHHC-1:03:MzkyOTMyMzY1MTBiOWVjYjI0MDA3NzYwZTM2YTlkNTJmM2M2ZDI0NGMwZjc5M2Q4ZGE1Zjc5MWQ5ZWQwYjA0NAuu6f8=: 00:24:50.997 21:30:05 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 4 00:24:50.997 21:30:05 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:50.997 21:30:05 -- host/auth.sh@68 -- # digest=sha256 00:24:50.997 21:30:05 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:24:50.997 21:30:05 -- host/auth.sh@68 -- # keyid=4 00:24:50.997 21:30:05 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:50.997 21:30:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:50.997 21:30:05 -- common/autotest_common.sh@10 -- # set +x 00:24:50.997 21:30:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:50.997 21:30:05 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:50.997 21:30:05 -- nvmf/common.sh@717 -- # local ip 00:24:50.997 21:30:05 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:50.997 21:30:05 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:50.997 21:30:05 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:50.997 21:30:05 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:50.997 21:30:05 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:50.997 21:30:05 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:50.997 21:30:05 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:50.997 21:30:05 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:50.997 21:30:05 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:50.997 21:30:05 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:50.997 21:30:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:50.997 21:30:05 -- common/autotest_common.sh@10 -- # set +x 00:24:51.259 nvme0n1 00:24:51.259 21:30:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:51.259 21:30:06 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:51.259 21:30:06 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:51.259 21:30:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:51.260 21:30:06 -- common/autotest_common.sh@10 -- # set +x 00:24:51.260 21:30:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:51.260 21:30:06 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:51.260 21:30:06 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:51.260 21:30:06 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:24:51.260 21:30:06 -- common/autotest_common.sh@10 -- # set +x 00:24:51.260 21:30:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:51.260 21:30:06 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:24:51.260 21:30:06 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:51.260 21:30:06 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:24:51.260 21:30:06 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:51.260 21:30:06 -- host/auth.sh@44 -- # digest=sha256 00:24:51.260 21:30:06 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:51.260 21:30:06 -- host/auth.sh@44 -- # keyid=0 00:24:51.260 21:30:06 -- host/auth.sh@45 -- # key=DHHC-1:00:ODNhYzkzZmNkYTc5ODVhZmMzN2RhMTdlNjJhMTQxMzARnuzC: 00:24:51.260 21:30:06 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:51.260 21:30:06 -- host/auth.sh@48 -- # echo ffdhe3072 00:24:51.260 21:30:06 -- host/auth.sh@49 -- # echo DHHC-1:00:ODNhYzkzZmNkYTc5ODVhZmMzN2RhMTdlNjJhMTQxMzARnuzC: 00:24:51.260 21:30:06 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 0 00:24:51.260 21:30:06 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:51.260 21:30:06 -- host/auth.sh@68 -- # digest=sha256 00:24:51.260 21:30:06 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:24:51.260 21:30:06 -- host/auth.sh@68 -- # keyid=0 00:24:51.260 21:30:06 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:51.260 21:30:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:51.260 21:30:06 -- common/autotest_common.sh@10 -- # set +x 00:24:51.260 21:30:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:51.260 21:30:06 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:51.260 21:30:06 -- nvmf/common.sh@717 -- # local ip 00:24:51.260 21:30:06 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:51.260 21:30:06 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:51.260 21:30:06 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:51.260 21:30:06 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:51.260 21:30:06 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:51.260 21:30:06 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:51.260 21:30:06 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:51.260 21:30:06 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:51.260 21:30:06 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:51.260 21:30:06 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:24:51.260 21:30:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:51.260 21:30:06 -- common/autotest_common.sh@10 -- # set +x 00:24:51.519 nvme0n1 00:24:51.519 21:30:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:51.519 21:30:06 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:51.519 21:30:06 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:51.519 21:30:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:51.519 21:30:06 -- common/autotest_common.sh@10 -- # set +x 00:24:51.519 21:30:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:51.519 21:30:06 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:51.520 21:30:06 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:51.520 21:30:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:51.520 21:30:06 -- 
common/autotest_common.sh@10 -- # set +x 00:24:51.520 21:30:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:51.520 21:30:06 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:51.520 21:30:06 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:24:51.520 21:30:06 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:51.520 21:30:06 -- host/auth.sh@44 -- # digest=sha256 00:24:51.520 21:30:06 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:51.520 21:30:06 -- host/auth.sh@44 -- # keyid=1 00:24:51.520 21:30:06 -- host/auth.sh@45 -- # key=DHHC-1:00:ODg0OTgyN2JiYTA4ZmZhN2FiNGUzNzdkNDU2MGNjYTcwYWJhNzdhN2M0ZTM4ODA1gO84og==: 00:24:51.520 21:30:06 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:51.520 21:30:06 -- host/auth.sh@48 -- # echo ffdhe3072 00:24:51.520 21:30:06 -- host/auth.sh@49 -- # echo DHHC-1:00:ODg0OTgyN2JiYTA4ZmZhN2FiNGUzNzdkNDU2MGNjYTcwYWJhNzdhN2M0ZTM4ODA1gO84og==: 00:24:51.520 21:30:06 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 1 00:24:51.520 21:30:06 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:51.520 21:30:06 -- host/auth.sh@68 -- # digest=sha256 00:24:51.520 21:30:06 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:24:51.520 21:30:06 -- host/auth.sh@68 -- # keyid=1 00:24:51.520 21:30:06 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:51.520 21:30:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:51.520 21:30:06 -- common/autotest_common.sh@10 -- # set +x 00:24:51.520 21:30:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:51.520 21:30:06 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:51.520 21:30:06 -- nvmf/common.sh@717 -- # local ip 00:24:51.520 21:30:06 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:51.520 21:30:06 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:51.520 21:30:06 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:51.520 21:30:06 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:51.520 21:30:06 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:51.520 21:30:06 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:51.520 21:30:06 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:51.520 21:30:06 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:51.520 21:30:06 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:51.520 21:30:06 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:24:51.520 21:30:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:51.520 21:30:06 -- common/autotest_common.sh@10 -- # set +x 00:24:51.780 nvme0n1 00:24:51.781 21:30:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:51.781 21:30:06 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:51.781 21:30:06 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:51.781 21:30:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:51.781 21:30:06 -- common/autotest_common.sh@10 -- # set +x 00:24:51.781 21:30:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:51.781 21:30:06 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:51.781 21:30:06 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:51.781 21:30:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:51.781 21:30:06 -- common/autotest_common.sh@10 -- # set +x 00:24:51.781 21:30:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
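The target half of each pass is nvmet_auth_set_key, which writes the digest, DH group and secret for the allowed host; the next iteration below (sha256, ffdhe3072, key 2) comes down to three configfs writes. The destination attribute names are assumed from the kernel nvmet host entry, since only the echoed values appear in the log:

# Install key 2's DH-HMAC-CHAP parameters on the kernel target's host entry.
host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 'hmac(sha256)' > "$host/dhchap_hash"      # digest for this pass
echo ffdhe3072 > "$host/dhchap_dhgroup"        # DH group for this pass
echo 'DHHC-1:01:NWUzM2Y2NDQ4NDc5MWYwMDViZjcwNWViNmQ2MjdmNzbCQTLC:' > "$host/dhchap_key"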
00:24:51.781 21:30:06 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:51.781 21:30:06 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:24:51.781 21:30:06 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:51.781 21:30:06 -- host/auth.sh@44 -- # digest=sha256 00:24:51.781 21:30:06 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:51.781 21:30:06 -- host/auth.sh@44 -- # keyid=2 00:24:51.781 21:30:06 -- host/auth.sh@45 -- # key=DHHC-1:01:NWUzM2Y2NDQ4NDc5MWYwMDViZjcwNWViNmQ2MjdmNzbCQTLC: 00:24:51.781 21:30:06 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:51.781 21:30:06 -- host/auth.sh@48 -- # echo ffdhe3072 00:24:51.781 21:30:06 -- host/auth.sh@49 -- # echo DHHC-1:01:NWUzM2Y2NDQ4NDc5MWYwMDViZjcwNWViNmQ2MjdmNzbCQTLC: 00:24:51.781 21:30:06 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 2 00:24:51.781 21:30:06 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:51.781 21:30:06 -- host/auth.sh@68 -- # digest=sha256 00:24:51.781 21:30:06 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:24:51.781 21:30:06 -- host/auth.sh@68 -- # keyid=2 00:24:51.781 21:30:06 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:51.781 21:30:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:51.781 21:30:06 -- common/autotest_common.sh@10 -- # set +x 00:24:51.781 21:30:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:51.781 21:30:06 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:51.781 21:30:06 -- nvmf/common.sh@717 -- # local ip 00:24:51.781 21:30:06 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:51.781 21:30:06 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:51.781 21:30:06 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:51.781 21:30:06 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:51.781 21:30:06 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:51.781 21:30:06 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:51.781 21:30:06 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:51.781 21:30:06 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:51.781 21:30:06 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:51.781 21:30:06 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:51.781 21:30:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:51.781 21:30:06 -- common/autotest_common.sh@10 -- # set +x 00:24:51.781 nvme0n1 00:24:51.781 21:30:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:51.781 21:30:06 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:51.781 21:30:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:51.781 21:30:06 -- common/autotest_common.sh@10 -- # set +x 00:24:52.042 21:30:06 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:52.042 21:30:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:52.042 21:30:06 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:52.042 21:30:06 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:52.042 21:30:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:52.042 21:30:06 -- common/autotest_common.sh@10 -- # set +x 00:24:52.042 21:30:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:52.042 21:30:06 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:52.042 21:30:06 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 3 
00:24:52.042 21:30:06 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:52.042 21:30:06 -- host/auth.sh@44 -- # digest=sha256 00:24:52.042 21:30:06 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:52.042 21:30:06 -- host/auth.sh@44 -- # keyid=3 00:24:52.042 21:30:06 -- host/auth.sh@45 -- # key=DHHC-1:02:MTFjYmYzMDg0NTg2MzRhMzY0ODBlNzE1YWZkNmUzZWJmMzIyZWVkNjE0MDkzNWQz3FOfOQ==: 00:24:52.042 21:30:06 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:52.042 21:30:06 -- host/auth.sh@48 -- # echo ffdhe3072 00:24:52.042 21:30:06 -- host/auth.sh@49 -- # echo DHHC-1:02:MTFjYmYzMDg0NTg2MzRhMzY0ODBlNzE1YWZkNmUzZWJmMzIyZWVkNjE0MDkzNWQz3FOfOQ==: 00:24:52.042 21:30:06 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 3 00:24:52.042 21:30:06 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:52.042 21:30:06 -- host/auth.sh@68 -- # digest=sha256 00:24:52.042 21:30:06 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:24:52.043 21:30:06 -- host/auth.sh@68 -- # keyid=3 00:24:52.043 21:30:06 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:52.043 21:30:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:52.043 21:30:06 -- common/autotest_common.sh@10 -- # set +x 00:24:52.043 21:30:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:52.043 21:30:06 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:52.043 21:30:06 -- nvmf/common.sh@717 -- # local ip 00:24:52.043 21:30:06 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:52.043 21:30:06 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:52.043 21:30:06 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:52.043 21:30:06 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:52.043 21:30:06 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:52.043 21:30:06 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:52.043 21:30:06 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:52.043 21:30:06 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:52.043 21:30:06 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:52.043 21:30:06 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:24:52.043 21:30:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:52.043 21:30:06 -- common/autotest_common.sh@10 -- # set +x 00:24:52.043 nvme0n1 00:24:52.043 21:30:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:52.043 21:30:06 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:52.043 21:30:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:52.043 21:30:06 -- common/autotest_common.sh@10 -- # set +x 00:24:52.043 21:30:06 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:52.043 21:30:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:52.303 21:30:07 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:52.303 21:30:07 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:52.303 21:30:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:52.303 21:30:07 -- common/autotest_common.sh@10 -- # set +x 00:24:52.303 21:30:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:52.303 21:30:07 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:52.303 21:30:07 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:24:52.303 21:30:07 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:52.303 21:30:07 -- host/auth.sh@44 -- 
# digest=sha256 00:24:52.303 21:30:07 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:52.303 21:30:07 -- host/auth.sh@44 -- # keyid=4 00:24:52.303 21:30:07 -- host/auth.sh@45 -- # key=DHHC-1:03:MzkyOTMyMzY1MTBiOWVjYjI0MDA3NzYwZTM2YTlkNTJmM2M2ZDI0NGMwZjc5M2Q4ZGE1Zjc5MWQ5ZWQwYjA0NAuu6f8=: 00:24:52.303 21:30:07 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:52.303 21:30:07 -- host/auth.sh@48 -- # echo ffdhe3072 00:24:52.303 21:30:07 -- host/auth.sh@49 -- # echo DHHC-1:03:MzkyOTMyMzY1MTBiOWVjYjI0MDA3NzYwZTM2YTlkNTJmM2M2ZDI0NGMwZjc5M2Q4ZGE1Zjc5MWQ5ZWQwYjA0NAuu6f8=: 00:24:52.303 21:30:07 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 4 00:24:52.303 21:30:07 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:52.303 21:30:07 -- host/auth.sh@68 -- # digest=sha256 00:24:52.303 21:30:07 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:24:52.303 21:30:07 -- host/auth.sh@68 -- # keyid=4 00:24:52.303 21:30:07 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:52.303 21:30:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:52.303 21:30:07 -- common/autotest_common.sh@10 -- # set +x 00:24:52.303 21:30:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:52.303 21:30:07 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:52.303 21:30:07 -- nvmf/common.sh@717 -- # local ip 00:24:52.303 21:30:07 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:52.303 21:30:07 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:52.303 21:30:07 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:52.303 21:30:07 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:52.303 21:30:07 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:52.303 21:30:07 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:52.303 21:30:07 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:52.303 21:30:07 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:52.303 21:30:07 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:52.303 21:30:07 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:52.303 21:30:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:52.303 21:30:07 -- common/autotest_common.sh@10 -- # set +x 00:24:52.303 nvme0n1 00:24:52.303 21:30:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:52.303 21:30:07 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:52.303 21:30:07 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:52.303 21:30:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:52.303 21:30:07 -- common/autotest_common.sh@10 -- # set +x 00:24:52.303 21:30:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:52.303 21:30:07 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:52.303 21:30:07 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:52.303 21:30:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:52.303 21:30:07 -- common/autotest_common.sh@10 -- # set +x 00:24:52.303 21:30:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:52.303 21:30:07 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:24:52.303 21:30:07 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:52.303 21:30:07 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:24:52.303 21:30:07 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:52.303 21:30:07 -- host/auth.sh@44 -- # 
digest=sha256 00:24:52.303 21:30:07 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:52.303 21:30:07 -- host/auth.sh@44 -- # keyid=0 00:24:52.303 21:30:07 -- host/auth.sh@45 -- # key=DHHC-1:00:ODNhYzkzZmNkYTc5ODVhZmMzN2RhMTdlNjJhMTQxMzARnuzC: 00:24:52.303 21:30:07 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:52.303 21:30:07 -- host/auth.sh@48 -- # echo ffdhe4096 00:24:52.303 21:30:07 -- host/auth.sh@49 -- # echo DHHC-1:00:ODNhYzkzZmNkYTc5ODVhZmMzN2RhMTdlNjJhMTQxMzARnuzC: 00:24:52.303 21:30:07 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 0 00:24:52.303 21:30:07 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:52.303 21:30:07 -- host/auth.sh@68 -- # digest=sha256 00:24:52.303 21:30:07 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:24:52.303 21:30:07 -- host/auth.sh@68 -- # keyid=0 00:24:52.303 21:30:07 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:52.303 21:30:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:52.303 21:30:07 -- common/autotest_common.sh@10 -- # set +x 00:24:52.303 21:30:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:52.303 21:30:07 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:52.303 21:30:07 -- nvmf/common.sh@717 -- # local ip 00:24:52.303 21:30:07 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:52.303 21:30:07 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:52.303 21:30:07 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:52.303 21:30:07 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:52.303 21:30:07 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:52.303 21:30:07 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:52.303 21:30:07 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:52.303 21:30:07 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:52.303 21:30:07 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:52.303 21:30:07 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:24:52.303 21:30:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:52.303 21:30:07 -- common/autotest_common.sh@10 -- # set +x 00:24:52.562 nvme0n1 00:24:52.562 21:30:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:52.562 21:30:07 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:52.562 21:30:07 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:52.562 21:30:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:52.562 21:30:07 -- common/autotest_common.sh@10 -- # set +x 00:24:52.562 21:30:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:52.823 21:30:07 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:52.823 21:30:07 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:52.823 21:30:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:52.823 21:30:07 -- common/autotest_common.sh@10 -- # set +x 00:24:52.823 21:30:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:52.823 21:30:07 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:52.823 21:30:07 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:24:52.823 21:30:07 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:52.823 21:30:07 -- host/auth.sh@44 -- # digest=sha256 00:24:52.823 21:30:07 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:52.823 21:30:07 -- host/auth.sh@44 -- # keyid=1 00:24:52.823 21:30:07 -- 
host/auth.sh@45 -- # key=DHHC-1:00:ODg0OTgyN2JiYTA4ZmZhN2FiNGUzNzdkNDU2MGNjYTcwYWJhNzdhN2M0ZTM4ODA1gO84og==: 00:24:52.823 21:30:07 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:52.823 21:30:07 -- host/auth.sh@48 -- # echo ffdhe4096 00:24:52.823 21:30:07 -- host/auth.sh@49 -- # echo DHHC-1:00:ODg0OTgyN2JiYTA4ZmZhN2FiNGUzNzdkNDU2MGNjYTcwYWJhNzdhN2M0ZTM4ODA1gO84og==: 00:24:52.823 21:30:07 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 1 00:24:52.823 21:30:07 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:52.823 21:30:07 -- host/auth.sh@68 -- # digest=sha256 00:24:52.823 21:30:07 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:24:52.823 21:30:07 -- host/auth.sh@68 -- # keyid=1 00:24:52.823 21:30:07 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:52.823 21:30:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:52.823 21:30:07 -- common/autotest_common.sh@10 -- # set +x 00:24:52.823 21:30:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:52.823 21:30:07 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:52.823 21:30:07 -- nvmf/common.sh@717 -- # local ip 00:24:52.823 21:30:07 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:52.823 21:30:07 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:52.823 21:30:07 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:52.823 21:30:07 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:52.823 21:30:07 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:52.823 21:30:07 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:52.823 21:30:07 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:52.823 21:30:07 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:52.823 21:30:07 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:52.823 21:30:07 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:24:52.823 21:30:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:52.823 21:30:07 -- common/autotest_common.sh@10 -- # set +x 00:24:53.085 nvme0n1 00:24:53.085 21:30:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:53.085 21:30:07 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:53.085 21:30:07 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:53.085 21:30:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:53.085 21:30:07 -- common/autotest_common.sh@10 -- # set +x 00:24:53.085 21:30:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:53.085 21:30:07 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:53.085 21:30:07 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:53.085 21:30:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:53.085 21:30:07 -- common/autotest_common.sh@10 -- # set +x 00:24:53.085 21:30:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:53.085 21:30:07 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:53.085 21:30:07 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:24:53.085 21:30:07 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:53.085 21:30:07 -- host/auth.sh@44 -- # digest=sha256 00:24:53.085 21:30:07 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:53.085 21:30:07 -- host/auth.sh@44 -- # keyid=2 00:24:53.085 21:30:07 -- host/auth.sh@45 -- # key=DHHC-1:01:NWUzM2Y2NDQ4NDc5MWYwMDViZjcwNWViNmQ2MjdmNzbCQTLC: 00:24:53.085 21:30:07 -- 
host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:53.085 21:30:07 -- host/auth.sh@48 -- # echo ffdhe4096 00:24:53.085 21:30:07 -- host/auth.sh@49 -- # echo DHHC-1:01:NWUzM2Y2NDQ4NDc5MWYwMDViZjcwNWViNmQ2MjdmNzbCQTLC: 00:24:53.085 21:30:07 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 2 00:24:53.085 21:30:07 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:53.085 21:30:07 -- host/auth.sh@68 -- # digest=sha256 00:24:53.085 21:30:07 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:24:53.085 21:30:07 -- host/auth.sh@68 -- # keyid=2 00:24:53.085 21:30:07 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:53.085 21:30:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:53.085 21:30:07 -- common/autotest_common.sh@10 -- # set +x 00:24:53.085 21:30:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:53.085 21:30:07 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:53.085 21:30:07 -- nvmf/common.sh@717 -- # local ip 00:24:53.085 21:30:07 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:53.085 21:30:07 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:53.085 21:30:07 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:53.085 21:30:07 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:53.085 21:30:07 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:53.085 21:30:07 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:53.085 21:30:07 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:53.085 21:30:07 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:53.085 21:30:07 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:53.085 21:30:07 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:53.085 21:30:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:53.085 21:30:07 -- common/autotest_common.sh@10 -- # set +x 00:24:53.346 nvme0n1 00:24:53.346 21:30:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:53.346 21:30:08 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:53.346 21:30:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:53.346 21:30:08 -- common/autotest_common.sh@10 -- # set +x 00:24:53.346 21:30:08 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:53.346 21:30:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:53.346 21:30:08 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:53.346 21:30:08 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:53.346 21:30:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:53.346 21:30:08 -- common/autotest_common.sh@10 -- # set +x 00:24:53.346 21:30:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:53.346 21:30:08 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:53.346 21:30:08 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:24:53.346 21:30:08 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:53.346 21:30:08 -- host/auth.sh@44 -- # digest=sha256 00:24:53.346 21:30:08 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:53.346 21:30:08 -- host/auth.sh@44 -- # keyid=3 00:24:53.346 21:30:08 -- host/auth.sh@45 -- # key=DHHC-1:02:MTFjYmYzMDg0NTg2MzRhMzY0ODBlNzE1YWZkNmUzZWJmMzIyZWVkNjE0MDkzNWQz3FOfOQ==: 00:24:53.346 21:30:08 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:53.346 21:30:08 -- host/auth.sh@48 -- # echo ffdhe4096 00:24:53.346 21:30:08 -- host/auth.sh@49 
-- # echo DHHC-1:02:MTFjYmYzMDg0NTg2MzRhMzY0ODBlNzE1YWZkNmUzZWJmMzIyZWVkNjE0MDkzNWQz3FOfOQ==: 00:24:53.346 21:30:08 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 3 00:24:53.346 21:30:08 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:53.346 21:30:08 -- host/auth.sh@68 -- # digest=sha256 00:24:53.346 21:30:08 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:24:53.346 21:30:08 -- host/auth.sh@68 -- # keyid=3 00:24:53.346 21:30:08 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:53.346 21:30:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:53.346 21:30:08 -- common/autotest_common.sh@10 -- # set +x 00:24:53.346 21:30:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:53.346 21:30:08 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:53.346 21:30:08 -- nvmf/common.sh@717 -- # local ip 00:24:53.346 21:30:08 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:53.346 21:30:08 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:53.346 21:30:08 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:53.346 21:30:08 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:53.346 21:30:08 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:53.346 21:30:08 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:53.346 21:30:08 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:53.346 21:30:08 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:53.346 21:30:08 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:53.346 21:30:08 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:24:53.346 21:30:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:53.346 21:30:08 -- common/autotest_common.sh@10 -- # set +x 00:24:53.611 nvme0n1 00:24:53.611 21:30:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:53.611 21:30:08 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:53.611 21:30:08 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:53.611 21:30:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:53.611 21:30:08 -- common/autotest_common.sh@10 -- # set +x 00:24:53.611 21:30:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:53.611 21:30:08 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:53.611 21:30:08 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:53.611 21:30:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:53.611 21:30:08 -- common/autotest_common.sh@10 -- # set +x 00:24:53.611 21:30:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:53.611 21:30:08 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:53.611 21:30:08 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:24:53.611 21:30:08 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:53.611 21:30:08 -- host/auth.sh@44 -- # digest=sha256 00:24:53.611 21:30:08 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:53.611 21:30:08 -- host/auth.sh@44 -- # keyid=4 00:24:53.611 21:30:08 -- host/auth.sh@45 -- # key=DHHC-1:03:MzkyOTMyMzY1MTBiOWVjYjI0MDA3NzYwZTM2YTlkNTJmM2M2ZDI0NGMwZjc5M2Q4ZGE1Zjc5MWQ5ZWQwYjA0NAuu6f8=: 00:24:53.611 21:30:08 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:53.611 21:30:08 -- host/auth.sh@48 -- # echo ffdhe4096 00:24:53.611 21:30:08 -- host/auth.sh@49 -- # echo 
DHHC-1:03:MzkyOTMyMzY1MTBiOWVjYjI0MDA3NzYwZTM2YTlkNTJmM2M2ZDI0NGMwZjc5M2Q4ZGE1Zjc5MWQ5ZWQwYjA0NAuu6f8=: 00:24:53.611 21:30:08 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 4 00:24:53.611 21:30:08 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:53.611 21:30:08 -- host/auth.sh@68 -- # digest=sha256 00:24:53.611 21:30:08 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:24:53.611 21:30:08 -- host/auth.sh@68 -- # keyid=4 00:24:53.611 21:30:08 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:53.611 21:30:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:53.611 21:30:08 -- common/autotest_common.sh@10 -- # set +x 00:24:53.611 21:30:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:53.611 21:30:08 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:53.611 21:30:08 -- nvmf/common.sh@717 -- # local ip 00:24:53.611 21:30:08 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:53.611 21:30:08 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:53.611 21:30:08 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:53.611 21:30:08 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:53.611 21:30:08 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:53.611 21:30:08 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:53.611 21:30:08 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:53.611 21:30:08 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:53.611 21:30:08 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:53.611 21:30:08 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:53.611 21:30:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:53.611 21:30:08 -- common/autotest_common.sh@10 -- # set +x 00:24:53.873 nvme0n1 00:24:53.873 21:30:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:53.873 21:30:08 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:53.873 21:30:08 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:53.873 21:30:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:53.873 21:30:08 -- common/autotest_common.sh@10 -- # set +x 00:24:53.873 21:30:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:53.873 21:30:08 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:53.873 21:30:08 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:53.873 21:30:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:53.873 21:30:08 -- common/autotest_common.sh@10 -- # set +x 00:24:53.873 21:30:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:53.873 21:30:08 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:24:53.873 21:30:08 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:53.873 21:30:08 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:24:53.873 21:30:08 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:53.873 21:30:08 -- host/auth.sh@44 -- # digest=sha256 00:24:53.873 21:30:08 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:53.873 21:30:08 -- host/auth.sh@44 -- # keyid=0 00:24:53.873 21:30:08 -- host/auth.sh@45 -- # key=DHHC-1:00:ODNhYzkzZmNkYTc5ODVhZmMzN2RhMTdlNjJhMTQxMzARnuzC: 00:24:53.873 21:30:08 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:53.873 21:30:08 -- host/auth.sh@48 -- # echo ffdhe6144
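
Note the shape of the secrets being cycled through: each is an NVMe-oF DH-HMAC-CHAP secret in the TP 8006 ASCII form DHHC-1:<t>:<base64>:, where <t> is 00 for a plain secret and 01/02/03 for a secret pre-transformed with SHA-256/384/512, and the base64 payload carries the secret bytes plus a CRC-32 check value; keys 0 through 4 above exercise every transform code. For reference only (not taken from this run), recent nvme-cli can mint a compatible secret, with flag spellings as documented there:

  # Hypothetical example: a 32-byte secret with transform code 01 (SHA-256).
  nvme gen-dhchap-key --key-length=32 --hmac=1 --nqn=nqn.2024-02.io.spdk:host0

00:24:53.873 21:30:08 -- host/auth.sh@49 -- # echo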
DHHC-1:00:ODNhYzkzZmNkYTc5ODVhZmMzN2RhMTdlNjJhMTQxMzARnuzC: 00:24:53.873 21:30:08 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 0 00:24:53.873 21:30:08 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:53.873 21:30:08 -- host/auth.sh@68 -- # digest=sha256 00:24:53.873 21:30:08 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:24:53.873 21:30:08 -- host/auth.sh@68 -- # keyid=0 00:24:53.873 21:30:08 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:53.873 21:30:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:53.873 21:30:08 -- common/autotest_common.sh@10 -- # set +x 00:24:53.873 21:30:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:53.873 21:30:08 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:53.873 21:30:08 -- nvmf/common.sh@717 -- # local ip 00:24:53.873 21:30:08 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:53.873 21:30:08 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:53.873 21:30:08 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:53.873 21:30:08 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:53.873 21:30:08 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:53.873 21:30:08 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:53.873 21:30:08 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:53.873 21:30:08 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:53.873 21:30:08 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:53.873 21:30:08 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:24:53.873 21:30:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:53.873 21:30:08 -- common/autotest_common.sh@10 -- # set +x 00:24:54.443 nvme0n1 00:24:54.443 21:30:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:54.443 21:30:09 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:54.443 21:30:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:54.443 21:30:09 -- common/autotest_common.sh@10 -- # set +x 00:24:54.443 21:30:09 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:54.443 21:30:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:54.443 21:30:09 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:54.443 21:30:09 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:54.443 21:30:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:54.443 21:30:09 -- common/autotest_common.sh@10 -- # set +x 00:24:54.443 21:30:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:54.443 21:30:09 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:54.443 21:30:09 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:24:54.443 21:30:09 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:54.443 21:30:09 -- host/auth.sh@44 -- # digest=sha256 00:24:54.443 21:30:09 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:54.443 21:30:09 -- host/auth.sh@44 -- # keyid=1 00:24:54.443 21:30:09 -- host/auth.sh@45 -- # key=DHHC-1:00:ODg0OTgyN2JiYTA4ZmZhN2FiNGUzNzdkNDU2MGNjYTcwYWJhNzdhN2M0ZTM4ODA1gO84og==: 00:24:54.443 21:30:09 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:54.443 21:30:09 -- host/auth.sh@48 -- # echo ffdhe6144 00:24:54.443 21:30:09 -- host/auth.sh@49 -- # echo DHHC-1:00:ODg0OTgyN2JiYTA4ZmZhN2FiNGUzNzdkNDU2MGNjYTcwYWJhNzdhN2M0ZTM4ODA1gO84og==: 00:24:54.443 21:30:09 -- host/auth.sh@111 -- # 
connect_authenticate sha256 ffdhe6144 1 00:24:54.443 21:30:09 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:54.443 21:30:09 -- host/auth.sh@68 -- # digest=sha256 00:24:54.443 21:30:09 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:24:54.443 21:30:09 -- host/auth.sh@68 -- # keyid=1 00:24:54.443 21:30:09 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:54.443 21:30:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:54.443 21:30:09 -- common/autotest_common.sh@10 -- # set +x 00:24:54.443 21:30:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:54.443 21:30:09 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:54.443 21:30:09 -- nvmf/common.sh@717 -- # local ip 00:24:54.443 21:30:09 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:54.443 21:30:09 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:54.443 21:30:09 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:54.443 21:30:09 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:54.443 21:30:09 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:54.443 21:30:09 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:54.443 21:30:09 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:54.443 21:30:09 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:54.443 21:30:09 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:54.444 21:30:09 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:24:54.444 21:30:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:54.444 21:30:09 -- common/autotest_common.sh@10 -- # set +x 00:24:54.703 nvme0n1 00:24:54.703 21:30:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:54.703 21:30:09 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:54.703 21:30:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:54.703 21:30:09 -- common/autotest_common.sh@10 -- # set +x 00:24:54.703 21:30:09 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:54.703 21:30:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:54.703 21:30:09 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:54.703 21:30:09 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:54.703 21:30:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:54.703 21:30:09 -- common/autotest_common.sh@10 -- # set +x 00:24:54.703 21:30:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:54.703 21:30:09 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:54.703 21:30:09 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:24:54.703 21:30:09 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:54.703 21:30:09 -- host/auth.sh@44 -- # digest=sha256 00:24:54.703 21:30:09 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:54.703 21:30:09 -- host/auth.sh@44 -- # keyid=2 00:24:54.703 21:30:09 -- host/auth.sh@45 -- # key=DHHC-1:01:NWUzM2Y2NDQ4NDc5MWYwMDViZjcwNWViNmQ2MjdmNzbCQTLC: 00:24:54.703 21:30:09 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:54.703 21:30:09 -- host/auth.sh@48 -- # echo ffdhe6144 00:24:54.703 21:30:09 -- host/auth.sh@49 -- # echo DHHC-1:01:NWUzM2Y2NDQ4NDc5MWYwMDViZjcwNWViNmQ2MjdmNzbCQTLC: 00:24:54.703 21:30:09 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 2 00:24:54.703 21:30:09 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:54.703 21:30:09 -- host/auth.sh@68 -- # 
digest=sha256 00:24:54.703 21:30:09 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:24:54.703 21:30:09 -- host/auth.sh@68 -- # keyid=2 00:24:54.703 21:30:09 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:54.703 21:30:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:54.703 21:30:09 -- common/autotest_common.sh@10 -- # set +x 00:24:54.703 21:30:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:54.703 21:30:09 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:54.703 21:30:09 -- nvmf/common.sh@717 -- # local ip 00:24:54.703 21:30:09 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:54.703 21:30:09 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:54.703 21:30:09 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:54.703 21:30:09 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:54.703 21:30:09 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:54.703 21:30:09 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:54.703 21:30:09 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:54.703 21:30:09 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:54.703 21:30:09 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:54.703 21:30:09 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:54.703 21:30:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:54.703 21:30:09 -- common/autotest_common.sh@10 -- # set +x 00:24:55.270 nvme0n1 00:24:55.270 21:30:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:55.270 21:30:10 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:55.270 21:30:10 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:55.270 21:30:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:55.270 21:30:10 -- common/autotest_common.sh@10 -- # set +x 00:24:55.270 21:30:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:55.270 21:30:10 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:55.270 21:30:10 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:55.270 21:30:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:55.270 21:30:10 -- common/autotest_common.sh@10 -- # set +x 00:24:55.270 21:30:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:55.270 21:30:10 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:55.270 21:30:10 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:24:55.270 21:30:10 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:55.270 21:30:10 -- host/auth.sh@44 -- # digest=sha256 00:24:55.270 21:30:10 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:55.270 21:30:10 -- host/auth.sh@44 -- # keyid=3 00:24:55.270 21:30:10 -- host/auth.sh@45 -- # key=DHHC-1:02:MTFjYmYzMDg0NTg2MzRhMzY0ODBlNzE1YWZkNmUzZWJmMzIyZWVkNjE0MDkzNWQz3FOfOQ==: 00:24:55.270 21:30:10 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:55.270 21:30:10 -- host/auth.sh@48 -- # echo ffdhe6144 00:24:55.270 21:30:10 -- host/auth.sh@49 -- # echo DHHC-1:02:MTFjYmYzMDg0NTg2MzRhMzY0ODBlNzE1YWZkNmUzZWJmMzIyZWVkNjE0MDkzNWQz3FOfOQ==: 00:24:55.270 21:30:10 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 3 00:24:55.270 21:30:10 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:55.270 21:30:10 -- host/auth.sh@68 -- # digest=sha256 00:24:55.270 21:30:10 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:24:55.270 21:30:10 -- host/auth.sh@68 
-- # keyid=3 00:24:55.270 21:30:10 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:55.270 21:30:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:55.270 21:30:10 -- common/autotest_common.sh@10 -- # set +x 00:24:55.270 21:30:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:55.270 21:30:10 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:55.270 21:30:10 -- nvmf/common.sh@717 -- # local ip 00:24:55.270 21:30:10 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:55.270 21:30:10 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:55.270 21:30:10 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:55.270 21:30:10 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:55.270 21:30:10 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:55.270 21:30:10 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:55.270 21:30:10 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:55.270 21:30:10 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:55.270 21:30:10 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:55.270 21:30:10 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:24:55.271 21:30:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:55.271 21:30:10 -- common/autotest_common.sh@10 -- # set +x 00:24:55.528 nvme0n1 00:24:55.528 21:30:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:55.528 21:30:10 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:55.528 21:30:10 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:55.528 21:30:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:55.528 21:30:10 -- common/autotest_common.sh@10 -- # set +x 00:24:55.528 21:30:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:55.528 21:30:10 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:55.528 21:30:10 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:55.528 21:30:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:55.528 21:30:10 -- common/autotest_common.sh@10 -- # set +x 00:24:55.528 21:30:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:55.528 21:30:10 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:55.528 21:30:10 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:24:55.528 21:30:10 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:55.528 21:30:10 -- host/auth.sh@44 -- # digest=sha256 00:24:55.528 21:30:10 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:55.528 21:30:10 -- host/auth.sh@44 -- # keyid=4 00:24:55.528 21:30:10 -- host/auth.sh@45 -- # key=DHHC-1:03:MzkyOTMyMzY1MTBiOWVjYjI0MDA3NzYwZTM2YTlkNTJmM2M2ZDI0NGMwZjc5M2Q4ZGE1Zjc5MWQ5ZWQwYjA0NAuu6f8=: 00:24:55.528 21:30:10 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:55.528 21:30:10 -- host/auth.sh@48 -- # echo ffdhe6144 00:24:55.528 21:30:10 -- host/auth.sh@49 -- # echo DHHC-1:03:MzkyOTMyMzY1MTBiOWVjYjI0MDA3NzYwZTM2YTlkNTJmM2M2ZDI0NGMwZjc5M2Q4ZGE1Zjc5MWQ5ZWQwYjA0NAuu6f8=: 00:24:55.528 21:30:10 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 4 00:24:55.528 21:30:10 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:55.528 21:30:10 -- host/auth.sh@68 -- # digest=sha256 00:24:55.528 21:30:10 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:24:55.528 21:30:10 -- host/auth.sh@68 -- # keyid=4 00:24:55.528 21:30:10 -- host/auth.sh@69 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:55.528 21:30:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:55.528 21:30:10 -- common/autotest_common.sh@10 -- # set +x 00:24:55.785 21:30:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:55.785 21:30:10 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:55.785 21:30:10 -- nvmf/common.sh@717 -- # local ip 00:24:55.785 21:30:10 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:55.785 21:30:10 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:55.785 21:30:10 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:55.785 21:30:10 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:55.785 21:30:10 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:55.785 21:30:10 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:55.785 21:30:10 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:55.785 21:30:10 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:55.785 21:30:10 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:55.785 21:30:10 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:55.785 21:30:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:55.785 21:30:10 -- common/autotest_common.sh@10 -- # set +x 00:24:56.046 nvme0n1 00:24:56.046 21:30:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:56.046 21:30:10 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:56.046 21:30:10 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:56.046 21:30:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:56.046 21:30:10 -- common/autotest_common.sh@10 -- # set +x 00:24:56.046 21:30:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:56.046 21:30:10 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:56.046 21:30:10 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:56.046 21:30:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:56.046 21:30:10 -- common/autotest_common.sh@10 -- # set +x 00:24:56.046 21:30:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:56.046 21:30:10 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:24:56.046 21:30:10 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:56.046 21:30:10 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:24:56.046 21:30:10 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:56.046 21:30:10 -- host/auth.sh@44 -- # digest=sha256 00:24:56.046 21:30:10 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:56.046 21:30:10 -- host/auth.sh@44 -- # keyid=0 00:24:56.046 21:30:10 -- host/auth.sh@45 -- # key=DHHC-1:00:ODNhYzkzZmNkYTc5ODVhZmMzN2RhMTdlNjJhMTQxMzARnuzC: 00:24:56.046 21:30:10 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:56.046 21:30:10 -- host/auth.sh@48 -- # echo ffdhe8192 00:24:56.046 21:30:10 -- host/auth.sh@49 -- # echo DHHC-1:00:ODNhYzkzZmNkYTc5ODVhZmMzN2RhMTdlNjJhMTQxMzARnuzC: 00:24:56.046 21:30:10 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 0 00:24:56.046 21:30:10 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:56.046 21:30:10 -- host/auth.sh@68 -- # digest=sha256 00:24:56.046 21:30:10 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:24:56.046 21:30:10 -- host/auth.sh@68 -- # keyid=0 00:24:56.046 21:30:10 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:56.046 
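
Each rpc_cmd in this trace is bracketed by the xtrace_disable entry at common/autotest_common.sh@549, the set +x at @10, and a [[ 0 == 0 ]] status assertion at @577: the wrapper hides its own plumbing from the xtrace and then checks the RPC's exit code. A rough reconstruction of that pattern, with the rpc.py invocation assumed (the real helper in autotest_common.sh maintains a persistent RPC session and restores the prior xtrace state):

  # Hedged sketch of the rpc_cmd wrapper behavior visible in this log.
  xtrace_disable() { set +x; }   # logged at @10
  xtrace_restore() { set -x; }   # simplified; the real helper saves prior state
  rpc_cmd() {
      xtrace_disable
      local out rc=0
      out=$(./scripts/rpc.py "$@") || rc=$?
      xtrace_restore
      [[ $rc == 0 ]] && echo "$out"   # the @577 check, [[ 0 == 0 ]] on success
      return $rc
  }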
21:30:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:56.046 21:30:10 -- common/autotest_common.sh@10 -- # set +x 00:24:56.046 21:30:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:56.046 21:30:10 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:56.046 21:30:10 -- nvmf/common.sh@717 -- # local ip 00:24:56.046 21:30:10 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:56.046 21:30:10 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:56.046 21:30:10 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:56.046 21:30:10 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:56.046 21:30:10 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:56.046 21:30:10 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:56.046 21:30:10 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:56.046 21:30:10 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:56.046 21:30:10 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:56.046 21:30:10 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:24:56.046 21:30:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:56.046 21:30:10 -- common/autotest_common.sh@10 -- # set +x 00:24:56.621 nvme0n1 00:24:56.621 21:30:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:56.621 21:30:11 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:56.621 21:30:11 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:56.621 21:30:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:56.621 21:30:11 -- common/autotest_common.sh@10 -- # set +x 00:24:56.621 21:30:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:56.621 21:30:11 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:56.621 21:30:11 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:56.621 21:30:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:56.621 21:30:11 -- common/autotest_common.sh@10 -- # set +x 00:24:56.882 21:30:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:56.882 21:30:11 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:56.882 21:30:11 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:24:56.882 21:30:11 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:56.882 21:30:11 -- host/auth.sh@44 -- # digest=sha256 00:24:56.882 21:30:11 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:56.882 21:30:11 -- host/auth.sh@44 -- # keyid=1 00:24:56.882 21:30:11 -- host/auth.sh@45 -- # key=DHHC-1:00:ODg0OTgyN2JiYTA4ZmZhN2FiNGUzNzdkNDU2MGNjYTcwYWJhNzdhN2M0ZTM4ODA1gO84og==: 00:24:56.882 21:30:11 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:56.882 21:30:11 -- host/auth.sh@48 -- # echo ffdhe8192 00:24:56.882 21:30:11 -- host/auth.sh@49 -- # echo DHHC-1:00:ODg0OTgyN2JiYTA4ZmZhN2FiNGUzNzdkNDU2MGNjYTcwYWJhNzdhN2M0ZTM4ODA1gO84og==: 00:24:56.882 21:30:11 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 1 00:24:56.883 21:30:11 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:56.883 21:30:11 -- host/auth.sh@68 -- # digest=sha256 00:24:56.883 21:30:11 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:24:56.883 21:30:11 -- host/auth.sh@68 -- # keyid=1 00:24:56.883 21:30:11 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:56.883 21:30:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:56.883 21:30:11 -- common/autotest_common.sh@10 -- # 
set +x 00:24:56.883 21:30:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:56.883 21:30:11 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:56.883 21:30:11 -- nvmf/common.sh@717 -- # local ip 00:24:56.883 21:30:11 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:56.883 21:30:11 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:56.883 21:30:11 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:56.883 21:30:11 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:56.883 21:30:11 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:56.883 21:30:11 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:56.883 21:30:11 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:56.883 21:30:11 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:56.883 21:30:11 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:56.883 21:30:11 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:24:56.883 21:30:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:56.883 21:30:11 -- common/autotest_common.sh@10 -- # set +x 00:24:57.448 nvme0n1 00:24:57.448 21:30:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:57.448 21:30:12 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:57.448 21:30:12 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:57.448 21:30:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:57.448 21:30:12 -- common/autotest_common.sh@10 -- # set +x 00:24:57.448 21:30:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:57.448 21:30:12 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:57.448 21:30:12 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:57.448 21:30:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:57.448 21:30:12 -- common/autotest_common.sh@10 -- # set +x 00:24:57.448 21:30:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:57.448 21:30:12 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:57.448 21:30:12 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:24:57.448 21:30:12 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:57.448 21:30:12 -- host/auth.sh@44 -- # digest=sha256 00:24:57.448 21:30:12 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:57.448 21:30:12 -- host/auth.sh@44 -- # keyid=2 00:24:57.448 21:30:12 -- host/auth.sh@45 -- # key=DHHC-1:01:NWUzM2Y2NDQ4NDc5MWYwMDViZjcwNWViNmQ2MjdmNzbCQTLC: 00:24:57.448 21:30:12 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:57.448 21:30:12 -- host/auth.sh@48 -- # echo ffdhe8192 00:24:57.448 21:30:12 -- host/auth.sh@49 -- # echo DHHC-1:01:NWUzM2Y2NDQ4NDc5MWYwMDViZjcwNWViNmQ2MjdmNzbCQTLC: 00:24:57.448 21:30:12 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 2 00:24:57.448 21:30:12 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:57.448 21:30:12 -- host/auth.sh@68 -- # digest=sha256 00:24:57.448 21:30:12 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:24:57.448 21:30:12 -- host/auth.sh@68 -- # keyid=2 00:24:57.448 21:30:12 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:57.448 21:30:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:57.448 21:30:12 -- common/autotest_common.sh@10 -- # set +x 00:24:57.448 21:30:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:57.448 21:30:12 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:57.448 21:30:12 -- 
nvmf/common.sh@717 -- # local ip 00:24:57.448 21:30:12 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:57.448 21:30:12 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:57.448 21:30:12 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:57.448 21:30:12 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:57.448 21:30:12 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:57.448 21:30:12 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:57.448 21:30:12 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:57.448 21:30:12 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:57.448 21:30:12 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:57.448 21:30:12 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:57.448 21:30:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:57.448 21:30:12 -- common/autotest_common.sh@10 -- # set +x 00:24:58.016 nvme0n1 00:24:58.016 21:30:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:58.016 21:30:12 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:58.016 21:30:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:58.016 21:30:12 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:58.016 21:30:12 -- common/autotest_common.sh@10 -- # set +x 00:24:58.016 21:30:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:58.016 21:30:12 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:58.016 21:30:12 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:58.016 21:30:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:58.016 21:30:12 -- common/autotest_common.sh@10 -- # set +x 00:24:58.016 21:30:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:58.016 21:30:12 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:58.016 21:30:12 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:24:58.016 21:30:12 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:58.016 21:30:12 -- host/auth.sh@44 -- # digest=sha256 00:24:58.016 21:30:12 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:58.016 21:30:12 -- host/auth.sh@44 -- # keyid=3 00:24:58.016 21:30:12 -- host/auth.sh@45 -- # key=DHHC-1:02:MTFjYmYzMDg0NTg2MzRhMzY0ODBlNzE1YWZkNmUzZWJmMzIyZWVkNjE0MDkzNWQz3FOfOQ==: 00:24:58.016 21:30:12 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:58.016 21:30:12 -- host/auth.sh@48 -- # echo ffdhe8192 00:24:58.016 21:30:12 -- host/auth.sh@49 -- # echo DHHC-1:02:MTFjYmYzMDg0NTg2MzRhMzY0ODBlNzE1YWZkNmUzZWJmMzIyZWVkNjE0MDkzNWQz3FOfOQ==: 00:24:58.016 21:30:12 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 3 00:24:58.016 21:30:12 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:58.016 21:30:12 -- host/auth.sh@68 -- # digest=sha256 00:24:58.016 21:30:12 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:24:58.016 21:30:12 -- host/auth.sh@68 -- # keyid=3 00:24:58.016 21:30:12 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:58.016 21:30:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:58.016 21:30:12 -- common/autotest_common.sh@10 -- # set +x 00:24:58.016 21:30:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:58.016 21:30:12 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:58.016 21:30:12 -- nvmf/common.sh@717 -- # local ip 00:24:58.016 21:30:12 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:58.016 21:30:12 
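
The get_main_ns_ip helper being traced here (nvmf/common.sh@717-731) only decides which environment variable names the initiator-side address for the active transport, then dereferences it. A reconstruction matching the logged branches, with the transport variable's name assumed; every call in this run takes the tcp arm and prints 10.0.0.1:

  # Reconstructed from the xtrace at nvmf/common.sh@717-731.
  get_main_ns_ip() {
      local ip
      local -A ip_candidates=(
          ["rdma"]=NVMF_FIRST_TARGET_IP
          ["tcp"]=NVMF_INITIATOR_IP
      )
      [[ -z $TEST_TRANSPORT ]] && return 1                 # seen as [[ -z tcp ]]
      [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
      ip=${ip_candidates[$TEST_TRANSPORT]}                 # ip=NVMF_INITIATOR_IP
      [[ -z ${!ip} ]] && return 1                          # seen as [[ -z 10.0.0.1 ]]
      echo "${!ip}"                                        # 10.0.0.1 in this run
  }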
-- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:58.016 21:30:12 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:58.016 21:30:12 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:58.016 21:30:12 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:58.016 21:30:12 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:58.016 21:30:12 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:58.016 21:30:12 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:58.016 21:30:12 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:58.016 21:30:12 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:24:58.016 21:30:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:58.016 21:30:12 -- common/autotest_common.sh@10 -- # set +x 00:24:58.636 nvme0n1 00:24:58.636 21:30:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:58.636 21:30:13 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:58.636 21:30:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:58.636 21:30:13 -- common/autotest_common.sh@10 -- # set +x 00:24:58.636 21:30:13 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:58.636 21:30:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:58.636 21:30:13 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:58.636 21:30:13 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:58.636 21:30:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:58.636 21:30:13 -- common/autotest_common.sh@10 -- # set +x 00:24:58.893 21:30:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:58.893 21:30:13 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:58.893 21:30:13 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:24:58.893 21:30:13 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:58.893 21:30:13 -- host/auth.sh@44 -- # digest=sha256 00:24:58.893 21:30:13 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:58.893 21:30:13 -- host/auth.sh@44 -- # keyid=4 00:24:58.893 21:30:13 -- host/auth.sh@45 -- # key=DHHC-1:03:MzkyOTMyMzY1MTBiOWVjYjI0MDA3NzYwZTM2YTlkNTJmM2M2ZDI0NGMwZjc5M2Q4ZGE1Zjc5MWQ5ZWQwYjA0NAuu6f8=: 00:24:58.893 21:30:13 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:58.893 21:30:13 -- host/auth.sh@48 -- # echo ffdhe8192 00:24:58.893 21:30:13 -- host/auth.sh@49 -- # echo DHHC-1:03:MzkyOTMyMzY1MTBiOWVjYjI0MDA3NzYwZTM2YTlkNTJmM2M2ZDI0NGMwZjc5M2Q4ZGE1Zjc5MWQ5ZWQwYjA0NAuu6f8=: 00:24:58.893 21:30:13 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 4 00:24:58.893 21:30:13 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:58.893 21:30:13 -- host/auth.sh@68 -- # digest=sha256 00:24:58.893 21:30:13 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:24:58.893 21:30:13 -- host/auth.sh@68 -- # keyid=4 00:24:58.893 21:30:13 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:58.893 21:30:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:58.893 21:30:13 -- common/autotest_common.sh@10 -- # set +x 00:24:58.893 21:30:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:58.893 21:30:13 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:58.893 21:30:13 -- nvmf/common.sh@717 -- # local ip 00:24:58.893 21:30:13 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:58.893 21:30:13 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:58.893 21:30:13 -- 
nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:58.893 21:30:13 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:58.893 21:30:13 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:58.893 21:30:13 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:58.893 21:30:13 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:58.893 21:30:13 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:58.893 21:30:13 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:58.893 21:30:13 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:58.893 21:30:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:58.893 21:30:13 -- common/autotest_common.sh@10 -- # set +x 00:24:59.459 nvme0n1 00:24:59.459 21:30:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:59.459 21:30:14 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:59.459 21:30:14 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:59.459 21:30:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:59.459 21:30:14 -- common/autotest_common.sh@10 -- # set +x 00:24:59.459 21:30:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:59.459 21:30:14 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:59.459 21:30:14 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:59.459 21:30:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:59.459 21:30:14 -- common/autotest_common.sh@10 -- # set +x 00:24:59.459 21:30:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:59.459 21:30:14 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:24:59.459 21:30:14 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:24:59.459 21:30:14 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:59.459 21:30:14 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:24:59.459 21:30:14 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:59.459 21:30:14 -- host/auth.sh@44 -- # digest=sha384 00:24:59.459 21:30:14 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:59.459 21:30:14 -- host/auth.sh@44 -- # keyid=0 00:24:59.459 21:30:14 -- host/auth.sh@45 -- # key=DHHC-1:00:ODNhYzkzZmNkYTc5ODVhZmMzN2RhMTdlNjJhMTQxMzARnuzC: 00:24:59.459 21:30:14 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:59.459 21:30:14 -- host/auth.sh@48 -- # echo ffdhe2048 00:24:59.459 21:30:14 -- host/auth.sh@49 -- # echo DHHC-1:00:ODNhYzkzZmNkYTc5ODVhZmMzN2RhMTdlNjJhMTQxMzARnuzC: 00:24:59.459 21:30:14 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 0 00:24:59.459 21:30:14 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:59.459 21:30:14 -- host/auth.sh@68 -- # digest=sha384 00:24:59.459 21:30:14 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:24:59.459 21:30:14 -- host/auth.sh@68 -- # keyid=0 00:24:59.459 21:30:14 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:59.459 21:30:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:59.459 21:30:14 -- common/autotest_common.sh@10 -- # set +x 00:24:59.459 21:30:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:59.459 21:30:14 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:59.459 21:30:14 -- nvmf/common.sh@717 -- # local ip 00:24:59.459 21:30:14 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:59.459 21:30:14 -- nvmf/common.sh@718 -- # local -A ip_candidates
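
At host/auth.sh@107 the outermost loop has just advanced from sha256 to sha384, so the same five keys get replayed against hmac(sha384) starting with the ffdhe2048 group; the whole trace is the cross product of digests, dhgroups, and keys. The loop structure is verbatim in the trace at @107-@111, and the echo trio at @47-@49 is consistent with programming the kernel nvmet target's per-host DH-HMAC-CHAP attributes through configfs; the configfs paths below are an assumption, not shown in this log:

  # Driving loops from host/auth.sh@107-111, with a hedged guess at the
  # configfs writes behind nvmet_auth_set_key (host/auth.sh@47-49).
  nvmet_auth_set_key() {
      local digest=$1 dhgroup=$2 keyid=$3
      local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
      echo "hmac($digest)"   > "$host/dhchap_hash"      # e.g. 'hmac(sha384)'
      echo "$dhgroup"        > "$host/dhchap_dhgroup"   # e.g. ffdhe2048
      echo "${keys[$keyid]}" > "$host/dhchap_key"       # the DHHC-1 secret
  }
  for digest in "${digests[@]}"; do
      for dhgroup in "${dhgroups[@]}"; do
          for keyid in "${!keys[@]}"; do
              nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
              connect_authenticate "$digest" "$dhgroup" "$keyid"
          done
      done
  done

00:24:59.459 21:30:14 --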
nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:59.459 21:30:14 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:59.459 21:30:14 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:59.459 21:30:14 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:59.459 21:30:14 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:59.459 21:30:14 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:59.459 21:30:14 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:59.460 21:30:14 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:24:59.460 21:30:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:59.460 21:30:14 -- common/autotest_common.sh@10 -- # set +x 00:24:59.720 nvme0n1 00:24:59.720 21:30:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:59.720 21:30:14 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:59.720 21:30:14 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:59.720 21:30:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:59.720 21:30:14 -- common/autotest_common.sh@10 -- # set +x 00:24:59.720 21:30:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:59.720 21:30:14 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:59.720 21:30:14 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:59.720 21:30:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:59.720 21:30:14 -- common/autotest_common.sh@10 -- # set +x 00:24:59.720 21:30:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:59.721 21:30:14 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:59.721 21:30:14 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:24:59.721 21:30:14 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:59.721 21:30:14 -- host/auth.sh@44 -- # digest=sha384 00:24:59.721 21:30:14 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:59.721 21:30:14 -- host/auth.sh@44 -- # keyid=1 00:24:59.721 21:30:14 -- host/auth.sh@45 -- # key=DHHC-1:00:ODg0OTgyN2JiYTA4ZmZhN2FiNGUzNzdkNDU2MGNjYTcwYWJhNzdhN2M0ZTM4ODA1gO84og==: 00:24:59.721 21:30:14 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:59.721 21:30:14 -- host/auth.sh@48 -- # echo ffdhe2048 00:24:59.721 21:30:14 -- host/auth.sh@49 -- # echo DHHC-1:00:ODg0OTgyN2JiYTA4ZmZhN2FiNGUzNzdkNDU2MGNjYTcwYWJhNzdhN2M0ZTM4ODA1gO84og==: 00:24:59.721 21:30:14 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 1 00:24:59.721 21:30:14 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:59.721 21:30:14 -- host/auth.sh@68 -- # digest=sha384 00:24:59.721 21:30:14 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:24:59.721 21:30:14 -- host/auth.sh@68 -- # keyid=1 00:24:59.721 21:30:14 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:59.721 21:30:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:59.721 21:30:14 -- common/autotest_common.sh@10 -- # set +x 00:24:59.721 21:30:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:59.721 21:30:14 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:59.721 21:30:14 -- nvmf/common.sh@717 -- # local ip 00:24:59.721 21:30:14 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:59.721 21:30:14 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:59.721 21:30:14 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:59.721 21:30:14 -- nvmf/common.sh@721 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:59.721 21:30:14 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:59.721 21:30:14 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:59.721 21:30:14 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:59.721 21:30:14 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:59.721 21:30:14 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:59.721 21:30:14 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:24:59.721 21:30:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:59.721 21:30:14 -- common/autotest_common.sh@10 -- # set +x 00:24:59.721 nvme0n1 00:24:59.721 21:30:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:59.721 21:30:14 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:59.721 21:30:14 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:59.721 21:30:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:59.721 21:30:14 -- common/autotest_common.sh@10 -- # set +x 00:24:59.721 21:30:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:59.721 21:30:14 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:59.721 21:30:14 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:59.721 21:30:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:59.721 21:30:14 -- common/autotest_common.sh@10 -- # set +x 00:25:00.034 21:30:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:00.034 21:30:14 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:00.034 21:30:14 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:25:00.034 21:30:14 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:00.034 21:30:14 -- host/auth.sh@44 -- # digest=sha384 00:25:00.034 21:30:14 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:00.034 21:30:14 -- host/auth.sh@44 -- # keyid=2 00:25:00.034 21:30:14 -- host/auth.sh@45 -- # key=DHHC-1:01:NWUzM2Y2NDQ4NDc5MWYwMDViZjcwNWViNmQ2MjdmNzbCQTLC: 00:25:00.034 21:30:14 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:25:00.034 21:30:14 -- host/auth.sh@48 -- # echo ffdhe2048 00:25:00.034 21:30:14 -- host/auth.sh@49 -- # echo DHHC-1:01:NWUzM2Y2NDQ4NDc5MWYwMDViZjcwNWViNmQ2MjdmNzbCQTLC: 00:25:00.034 21:30:14 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 2 00:25:00.034 21:30:14 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:00.034 21:30:14 -- host/auth.sh@68 -- # digest=sha384 00:25:00.034 21:30:14 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:25:00.034 21:30:14 -- host/auth.sh@68 -- # keyid=2 00:25:00.034 21:30:14 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:00.034 21:30:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:00.034 21:30:14 -- common/autotest_common.sh@10 -- # set +x 00:25:00.034 21:30:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:00.034 21:30:14 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:00.034 21:30:14 -- nvmf/common.sh@717 -- # local ip 00:25:00.034 21:30:14 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:00.034 21:30:14 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:00.034 21:30:14 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:00.034 21:30:14 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:00.034 21:30:14 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:00.034 21:30:14 -- nvmf/common.sh@723 -- # [[ -z 
NVMF_INITIATOR_IP ]] 00:25:00.034 21:30:14 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:00.034 21:30:14 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:00.034 21:30:14 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:00.034 21:30:14 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:00.034 21:30:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:00.034 21:30:14 -- common/autotest_common.sh@10 -- # set +x 00:25:00.034 nvme0n1 00:25:00.034 21:30:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:00.034 21:30:14 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:00.034 21:30:14 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:00.034 21:30:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:00.034 21:30:14 -- common/autotest_common.sh@10 -- # set +x 00:25:00.034 21:30:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:00.034 21:30:14 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:00.034 21:30:14 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:00.034 21:30:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:00.034 21:30:14 -- common/autotest_common.sh@10 -- # set +x 00:25:00.034 21:30:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:00.034 21:30:14 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:00.034 21:30:14 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:25:00.034 21:30:14 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:00.034 21:30:14 -- host/auth.sh@44 -- # digest=sha384 00:25:00.034 21:30:14 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:00.035 21:30:14 -- host/auth.sh@44 -- # keyid=3 00:25:00.035 21:30:14 -- host/auth.sh@45 -- # key=DHHC-1:02:MTFjYmYzMDg0NTg2MzRhMzY0ODBlNzE1YWZkNmUzZWJmMzIyZWVkNjE0MDkzNWQz3FOfOQ==: 00:25:00.035 21:30:14 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:25:00.035 21:30:14 -- host/auth.sh@48 -- # echo ffdhe2048 00:25:00.035 21:30:14 -- host/auth.sh@49 -- # echo DHHC-1:02:MTFjYmYzMDg0NTg2MzRhMzY0ODBlNzE1YWZkNmUzZWJmMzIyZWVkNjE0MDkzNWQz3FOfOQ==: 00:25:00.035 21:30:14 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 3 00:25:00.035 21:30:14 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:00.035 21:30:14 -- host/auth.sh@68 -- # digest=sha384 00:25:00.035 21:30:14 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:25:00.035 21:30:14 -- host/auth.sh@68 -- # keyid=3 00:25:00.035 21:30:14 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:00.035 21:30:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:00.035 21:30:14 -- common/autotest_common.sh@10 -- # set +x 00:25:00.035 21:30:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:00.035 21:30:14 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:00.035 21:30:14 -- nvmf/common.sh@717 -- # local ip 00:25:00.035 21:30:14 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:00.035 21:30:14 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:00.035 21:30:14 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:00.035 21:30:14 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:00.035 21:30:14 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:00.035 21:30:14 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:00.035 21:30:14 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:00.035 21:30:14 -- 
nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:00.035 21:30:14 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:00.035 21:30:14 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:25:00.035 21:30:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:00.035 21:30:14 -- common/autotest_common.sh@10 -- # set +x 00:25:00.296 nvme0n1 00:25:00.296 21:30:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:00.296 21:30:15 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:00.296 21:30:15 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:00.296 21:30:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:00.297 21:30:15 -- common/autotest_common.sh@10 -- # set +x 00:25:00.297 21:30:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:00.297 21:30:15 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:00.297 21:30:15 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:00.297 21:30:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:00.297 21:30:15 -- common/autotest_common.sh@10 -- # set +x 00:25:00.297 21:30:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:00.297 21:30:15 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:00.297 21:30:15 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:25:00.297 21:30:15 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:00.297 21:30:15 -- host/auth.sh@44 -- # digest=sha384 00:25:00.297 21:30:15 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:00.297 21:30:15 -- host/auth.sh@44 -- # keyid=4 00:25:00.297 21:30:15 -- host/auth.sh@45 -- # key=DHHC-1:03:MzkyOTMyMzY1MTBiOWVjYjI0MDA3NzYwZTM2YTlkNTJmM2M2ZDI0NGMwZjc5M2Q4ZGE1Zjc5MWQ5ZWQwYjA0NAuu6f8=: 00:25:00.297 21:30:15 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:25:00.297 21:30:15 -- host/auth.sh@48 -- # echo ffdhe2048 00:25:00.297 21:30:15 -- host/auth.sh@49 -- # echo DHHC-1:03:MzkyOTMyMzY1MTBiOWVjYjI0MDA3NzYwZTM2YTlkNTJmM2M2ZDI0NGMwZjc5M2Q4ZGE1Zjc5MWQ5ZWQwYjA0NAuu6f8=: 00:25:00.297 21:30:15 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 4 00:25:00.297 21:30:15 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:00.297 21:30:15 -- host/auth.sh@68 -- # digest=sha384 00:25:00.297 21:30:15 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:25:00.297 21:30:15 -- host/auth.sh@68 -- # keyid=4 00:25:00.297 21:30:15 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:00.297 21:30:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:00.297 21:30:15 -- common/autotest_common.sh@10 -- # set +x 00:25:00.297 21:30:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:00.297 21:30:15 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:00.297 21:30:15 -- nvmf/common.sh@717 -- # local ip 00:25:00.297 21:30:15 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:00.297 21:30:15 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:00.297 21:30:15 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:00.297 21:30:15 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:00.297 21:30:15 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:00.297 21:30:15 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:00.297 21:30:15 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:00.297 21:30:15 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:00.297 21:30:15 -- 
00:25:00.297 21:30:15 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:25:00.297 21:30:15 -- common/autotest_common.sh@549 -- # xtrace_disable
00:25:00.297 21:30:15 -- common/autotest_common.sh@10 -- # set +x
00:25:00.297 nvme0n1
00:25:00.297 21:30:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:25:00.557 21:30:15 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:25:00.557 21:30:15 -- common/autotest_common.sh@549 -- # xtrace_disable
00:25:00.557 21:30:15 -- host/auth.sh@73 -- # jq -r '.[].name'
00:25:00.557 21:30:15 -- common/autotest_common.sh@10 -- # set +x
00:25:00.557 21:30:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:25:00.557 21:30:15 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:00.557 21:30:15 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:00.557 21:30:15 -- common/autotest_common.sh@549 -- # xtrace_disable
00:25:00.557 21:30:15 -- common/autotest_common.sh@10 -- # set +x
00:25:00.557 21:30:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:25:00.557 21:30:15 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}"
00:25:00.557 21:30:15 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:25:00.557 21:30:15 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 0
00:25:00.557 21:30:15 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:25:00.557 21:30:15 -- host/auth.sh@44 -- # digest=sha384
00:25:00.557 21:30:15 -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:25:00.557 21:30:15 -- host/auth.sh@44 -- # keyid=0
00:25:00.557 21:30:15 -- host/auth.sh@45 -- # key=DHHC-1:00:ODNhYzkzZmNkYTc5ODVhZmMzN2RhMTdlNjJhMTQxMzARnuzC:
00:25:00.557 21:30:15 -- host/auth.sh@47 -- # echo 'hmac(sha384)'
00:25:00.557 21:30:15 -- host/auth.sh@48 -- # echo ffdhe3072
00:25:00.557 21:30:15 -- host/auth.sh@49 -- # echo DHHC-1:00:ODNhYzkzZmNkYTc5ODVhZmMzN2RhMTdlNjJhMTQxMzARnuzC:
00:25:00.557 21:30:15 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 0
00:25:00.557 21:30:15 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:25:00.557 21:30:15 -- host/auth.sh@68 -- # digest=sha384
00:25:00.557 21:30:15 -- host/auth.sh@68 -- # dhgroup=ffdhe3072
00:25:00.557 21:30:15 -- host/auth.sh@68 -- # keyid=0
00:25:00.557 21:30:15 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:25:00.557 21:30:15 -- common/autotest_common.sh@549 -- # xtrace_disable
00:25:00.557 21:30:15 -- common/autotest_common.sh@10 -- # set +x
00:25:00.557 21:30:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:25:00.557 21:30:15 -- host/auth.sh@70 -- # get_main_ns_ip
00:25:00.557 21:30:15 -- nvmf/common.sh@717 -- # local ip
00:25:00.557 21:30:15 -- nvmf/common.sh@718 -- # ip_candidates=()
00:25:00.557 21:30:15 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:25:00.557 21:30:15 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:00.557 21:30:15 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:00.557 21:30:15 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:25:00.557 21:30:15 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:00.557 21:30:15 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:25:00.557 21:30:15 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:25:00.557 21:30:15 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:25:00.557 21:30:15 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0
00:25:00.557 21:30:15 -- common/autotest_common.sh@549 -- # xtrace_disable
00:25:00.557 21:30:15 -- common/autotest_common.sh@10 -- # set +x
00:25:00.557 nvme0n1
00:25:00.557 21:30:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:25:00.557 21:30:15 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:25:00.557 21:30:15 -- host/auth.sh@73 -- # jq -r '.[].name'
00:25:00.557 21:30:15 -- common/autotest_common.sh@549 -- # xtrace_disable
00:25:00.557 21:30:15 -- common/autotest_common.sh@10 -- # set +x
00:25:00.557 21:30:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:25:00.557 21:30:15 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:00.557 21:30:15 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:00.557 21:30:15 -- common/autotest_common.sh@549 -- # xtrace_disable
00:25:00.557 21:30:15 -- common/autotest_common.sh@10 -- # set +x
00:25:00.817 21:30:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:25:00.817 21:30:15 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:25:00.817 21:30:15 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 1
00:25:00.817 21:30:15 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:25:00.817 21:30:15 -- host/auth.sh@44 -- # digest=sha384
00:25:00.817 21:30:15 -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:25:00.817 21:30:15 -- host/auth.sh@44 -- # keyid=1
00:25:00.817 21:30:15 -- host/auth.sh@45 -- # key=DHHC-1:00:ODg0OTgyN2JiYTA4ZmZhN2FiNGUzNzdkNDU2MGNjYTcwYWJhNzdhN2M0ZTM4ODA1gO84og==:
00:25:00.817 21:30:15 -- host/auth.sh@47 -- # echo 'hmac(sha384)'
00:25:00.817 21:30:15 -- host/auth.sh@48 -- # echo ffdhe3072
00:25:00.817 21:30:15 -- host/auth.sh@49 -- # echo DHHC-1:00:ODg0OTgyN2JiYTA4ZmZhN2FiNGUzNzdkNDU2MGNjYTcwYWJhNzdhN2M0ZTM4ODA1gO84og==:
00:25:00.817 21:30:15 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 1
00:25:00.817 21:30:15 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:25:00.817 21:30:15 -- host/auth.sh@68 -- # digest=sha384
00:25:00.817 21:30:15 -- host/auth.sh@68 -- # dhgroup=ffdhe3072
00:25:00.817 21:30:15 -- host/auth.sh@68 -- # keyid=1
00:25:00.817 21:30:15 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:25:00.817 21:30:15 -- common/autotest_common.sh@549 -- # xtrace_disable
00:25:00.817 21:30:15 -- common/autotest_common.sh@10 -- # set +x
00:25:00.817 21:30:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:25:00.817 21:30:15 -- host/auth.sh@70 -- # get_main_ns_ip
00:25:00.817 21:30:15 -- nvmf/common.sh@717 -- # local ip
00:25:00.817 21:30:15 -- nvmf/common.sh@718 -- # ip_candidates=()
00:25:00.817 21:30:15 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:25:00.817 21:30:15 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:00.817 21:30:15 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:00.817 21:30:15 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:25:00.817 21:30:15 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:00.817 21:30:15 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:25:00.817 21:30:15 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:25:00.817 21:30:15 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:25:00.817 21:30:15 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1
00:25:00.817 21:30:15 -- common/autotest_common.sh@549 -- # xtrace_disable
00:25:00.817 21:30:15 -- common/autotest_common.sh@10 -- # set +x
00:25:00.817 nvme0n1
00:25:00.817 21:30:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:25:00.817 21:30:15 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:25:00.817 21:30:15 -- host/auth.sh@73 -- # jq -r '.[].name'
00:25:00.817 21:30:15 -- common/autotest_common.sh@549 -- # xtrace_disable
00:25:00.817 21:30:15 -- common/autotest_common.sh@10 -- # set +x
00:25:00.817 21:30:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:25:00.817 21:30:15 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:00.817 21:30:15 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:00.817 21:30:15 -- common/autotest_common.sh@549 -- # xtrace_disable
00:25:00.817 21:30:15 -- common/autotest_common.sh@10 -- # set +x
00:25:00.817 21:30:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:25:00.817 21:30:15 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:25:00.817 21:30:15 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 2
00:25:00.817 21:30:15 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:25:00.817 21:30:15 -- host/auth.sh@44 -- # digest=sha384
00:25:00.817 21:30:15 -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:25:00.817 21:30:15 -- host/auth.sh@44 -- # keyid=2
00:25:00.817 21:30:15 -- host/auth.sh@45 -- # key=DHHC-1:01:NWUzM2Y2NDQ4NDc5MWYwMDViZjcwNWViNmQ2MjdmNzbCQTLC:
00:25:00.817 21:30:15 -- host/auth.sh@47 -- # echo 'hmac(sha384)'
00:25:00.817 21:30:15 -- host/auth.sh@48 -- # echo ffdhe3072
00:25:00.817 21:30:15 -- host/auth.sh@49 -- # echo DHHC-1:01:NWUzM2Y2NDQ4NDc5MWYwMDViZjcwNWViNmQ2MjdmNzbCQTLC:
00:25:00.817 21:30:15 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 2
00:25:00.817 21:30:15 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:25:00.817 21:30:15 -- host/auth.sh@68 -- # digest=sha384
00:25:00.817 21:30:15 -- host/auth.sh@68 -- # dhgroup=ffdhe3072
00:25:00.817 21:30:15 -- host/auth.sh@68 -- # keyid=2
00:25:00.817 21:30:15 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:25:00.817 21:30:15 -- common/autotest_common.sh@549 -- # xtrace_disable
00:25:00.817 21:30:15 -- common/autotest_common.sh@10 -- # set +x
00:25:00.817 21:30:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:25:00.817 21:30:15 -- host/auth.sh@70 -- # get_main_ns_ip
00:25:00.817 21:30:15 -- nvmf/common.sh@717 -- # local ip
00:25:00.817 21:30:15 -- nvmf/common.sh@718 -- # ip_candidates=()
00:25:00.817 21:30:15 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:25:00.817 21:30:15 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:00.817 21:30:15 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:00.817 21:30:15 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:25:00.817 21:30:15 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:00.817 21:30:15 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:25:00.817 21:30:15 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:25:00.817 21:30:15 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:25:00.818 21:30:15 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:25:00.818 21:30:15 -- common/autotest_common.sh@549 -- # xtrace_disable
00:25:00.818 21:30:15 -- common/autotest_common.sh@10 -- # set +x
00:25:01.076 nvme0n1
00:25:01.076 21:30:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:25:01.076 21:30:15 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:25:01.076 21:30:15 -- host/auth.sh@73 -- # jq -r '.[].name'
00:25:01.076 21:30:15 -- common/autotest_common.sh@549 -- # xtrace_disable
00:25:01.076 21:30:15 -- common/autotest_common.sh@10 -- # set +x
00:25:01.076 21:30:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:25:01.076 21:30:15 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:01.076 21:30:15 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:01.076 21:30:15 -- common/autotest_common.sh@549 -- # xtrace_disable
00:25:01.076 21:30:15 -- common/autotest_common.sh@10 -- # set +x
00:25:01.076 21:30:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:25:01.076 21:30:15 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:25:01.076 21:30:15 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 3
00:25:01.076 21:30:15 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:25:01.076 21:30:15 -- host/auth.sh@44 -- # digest=sha384
00:25:01.076 21:30:15 -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:25:01.076 21:30:15 -- host/auth.sh@44 -- # keyid=3
00:25:01.076 21:30:15 -- host/auth.sh@45 -- # key=DHHC-1:02:MTFjYmYzMDg0NTg2MzRhMzY0ODBlNzE1YWZkNmUzZWJmMzIyZWVkNjE0MDkzNWQz3FOfOQ==:
00:25:01.076 21:30:15 -- host/auth.sh@47 -- # echo 'hmac(sha384)'
00:25:01.076 21:30:15 -- host/auth.sh@48 -- # echo ffdhe3072
00:25:01.076 21:30:15 -- host/auth.sh@49 -- # echo DHHC-1:02:MTFjYmYzMDg0NTg2MzRhMzY0ODBlNzE1YWZkNmUzZWJmMzIyZWVkNjE0MDkzNWQz3FOfOQ==:
00:25:01.076 21:30:15 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 3
00:25:01.076 21:30:15 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:25:01.076 21:30:15 -- host/auth.sh@68 -- # digest=sha384
00:25:01.076 21:30:15 -- host/auth.sh@68 -- # dhgroup=ffdhe3072
00:25:01.077 21:30:15 -- host/auth.sh@68 -- # keyid=3
00:25:01.077 21:30:15 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:25:01.077 21:30:15 -- common/autotest_common.sh@549 -- # xtrace_disable
00:25:01.077 21:30:15 -- common/autotest_common.sh@10 -- # set +x
00:25:01.077 21:30:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:25:01.077 21:30:15 -- host/auth.sh@70 -- # get_main_ns_ip
00:25:01.077 21:30:15 -- nvmf/common.sh@717 -- # local ip
00:25:01.077 21:30:15 -- nvmf/common.sh@718 -- # ip_candidates=()
00:25:01.077 21:30:15 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:25:01.077 21:30:15 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:01.077 21:30:15 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:01.077 21:30:15 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:25:01.077 21:30:15 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:01.077 21:30:15 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:25:01.077 21:30:15 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:25:01.077 21:30:15 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:25:01.077 21:30:15 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3
00:25:01.077 21:30:15 -- common/autotest_common.sh@549 -- # xtrace_disable
00:25:01.077 21:30:15 -- common/autotest_common.sh@10 -- # set +x
00:25:01.335 nvme0n1
00:25:01.336 21:30:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:25:01.336 21:30:16 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:25:01.336 21:30:16 -- host/auth.sh@73 -- # jq -r '.[].name'
00:25:01.336 21:30:16 -- common/autotest_common.sh@549 -- # xtrace_disable
00:25:01.336 21:30:16 -- common/autotest_common.sh@10 -- # set +x
00:25:01.336 21:30:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:25:01.336 21:30:16 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:01.336 21:30:16 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:01.336 21:30:16 -- common/autotest_common.sh@549 -- # xtrace_disable
00:25:01.336 21:30:16 -- common/autotest_common.sh@10 -- # set +x
00:25:01.336 21:30:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:25:01.336 21:30:16 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:25:01.336 21:30:16 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 4
00:25:01.336 21:30:16 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:25:01.336 21:30:16 -- host/auth.sh@44 -- # digest=sha384
00:25:01.336 21:30:16 -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:25:01.336 21:30:16 -- host/auth.sh@44 -- # keyid=4
00:25:01.336 21:30:16 -- host/auth.sh@45 -- # key=DHHC-1:03:MzkyOTMyMzY1MTBiOWVjYjI0MDA3NzYwZTM2YTlkNTJmM2M2ZDI0NGMwZjc5M2Q4ZGE1Zjc5MWQ5ZWQwYjA0NAuu6f8=:
00:25:01.336 21:30:16 -- host/auth.sh@47 -- # echo 'hmac(sha384)'
00:25:01.336 21:30:16 -- host/auth.sh@48 -- # echo ffdhe3072
00:25:01.336 21:30:16 -- host/auth.sh@49 -- # echo DHHC-1:03:MzkyOTMyMzY1MTBiOWVjYjI0MDA3NzYwZTM2YTlkNTJmM2M2ZDI0NGMwZjc5M2Q4ZGE1Zjc5MWQ5ZWQwYjA0NAuu6f8=:
00:25:01.336 21:30:16 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 4
00:25:01.336 21:30:16 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:25:01.336 21:30:16 -- host/auth.sh@68 -- # digest=sha384
00:25:01.336 21:30:16 -- host/auth.sh@68 -- # dhgroup=ffdhe3072
00:25:01.336 21:30:16 -- host/auth.sh@68 -- # keyid=4
00:25:01.336 21:30:16 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:25:01.336 21:30:16 -- common/autotest_common.sh@549 -- # xtrace_disable
00:25:01.336 21:30:16 -- common/autotest_common.sh@10 -- # set +x
00:25:01.336 21:30:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:25:01.336 21:30:16 -- host/auth.sh@70 -- # get_main_ns_ip
00:25:01.336 21:30:16 -- nvmf/common.sh@717 -- # local ip
00:25:01.336 21:30:16 -- nvmf/common.sh@718 -- # ip_candidates=()
00:25:01.336 21:30:16 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:25:01.336 21:30:16 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:01.336 21:30:16 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:01.336 21:30:16 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:25:01.336 21:30:16 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:01.336 21:30:16 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:25:01.336 21:30:16 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:25:01.336 21:30:16 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:25:01.336 21:30:16 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:25:01.336 21:30:16 -- common/autotest_common.sh@549 -- # xtrace_disable
00:25:01.336 21:30:16 -- common/autotest_common.sh@10 -- # set +x
00:25:01.594 nvme0n1
00:25:01.594 21:30:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:25:01.594 21:30:16 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:25:01.594 21:30:16 -- host/auth.sh@73 -- # jq -r '.[].name'
00:25:01.594 21:30:16 -- common/autotest_common.sh@549 -- # xtrace_disable
00:25:01.594 21:30:16 -- common/autotest_common.sh@10 -- # set +x
00:25:01.594 21:30:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:25:01.594 21:30:16 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:01.594 21:30:16 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:01.594 21:30:16 -- common/autotest_common.sh@549 -- # xtrace_disable
00:25:01.594 21:30:16 -- common/autotest_common.sh@10 -- # set +x
00:25:01.594 21:30:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:25:01.594 21:30:16 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}"
00:25:01.594 21:30:16 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:25:01.594 21:30:16 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 0
00:25:01.594 21:30:16 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:25:01.594 21:30:16 -- host/auth.sh@44 -- # digest=sha384
00:25:01.594 21:30:16 -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:25:01.594 21:30:16 -- host/auth.sh@44 -- # keyid=0
00:25:01.594 21:30:16 -- host/auth.sh@45 -- # key=DHHC-1:00:ODNhYzkzZmNkYTc5ODVhZmMzN2RhMTdlNjJhMTQxMzARnuzC:
00:25:01.594 21:30:16 -- host/auth.sh@47 -- # echo 'hmac(sha384)'
00:25:01.594 21:30:16 -- host/auth.sh@48 -- # echo ffdhe4096
00:25:01.594 21:30:16 -- host/auth.sh@49 -- # echo DHHC-1:00:ODNhYzkzZmNkYTc5ODVhZmMzN2RhMTdlNjJhMTQxMzARnuzC:
00:25:01.594 21:30:16 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 0
00:25:01.594 21:30:16 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:25:01.594 21:30:16 -- host/auth.sh@68 -- # digest=sha384
00:25:01.594 21:30:16 -- host/auth.sh@68 -- # dhgroup=ffdhe4096
00:25:01.594 21:30:16 -- host/auth.sh@68 -- # keyid=0
00:25:01.594 21:30:16 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:25:01.594 21:30:16 -- common/autotest_common.sh@549 -- # xtrace_disable
00:25:01.594 21:30:16 -- common/autotest_common.sh@10 -- # set +x
00:25:01.594 21:30:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:25:01.594 21:30:16 -- host/auth.sh@70 -- # get_main_ns_ip
00:25:01.594 21:30:16 -- nvmf/common.sh@717 -- # local ip
00:25:01.594 21:30:16 -- nvmf/common.sh@718 -- # ip_candidates=()
00:25:01.594 21:30:16 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:25:01.594 21:30:16 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:01.594 21:30:16 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:01.594 21:30:16 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:25:01.594 21:30:16 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:01.594 21:30:16 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:25:01.594 21:30:16 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:25:01.594 21:30:16 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:25:01.594 21:30:16 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0
00:25:01.594 21:30:16 -- common/autotest_common.sh@549 -- # xtrace_disable
00:25:01.594 21:30:16 -- common/autotest_common.sh@10 -- # set +x
00:25:01.854 nvme0n1
00:25:01.854 21:30:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:25:01.854 21:30:16 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:25:01.854 21:30:16 -- host/auth.sh@73 -- # jq -r '.[].name'
00:25:01.854 21:30:16 -- common/autotest_common.sh@549 -- # xtrace_disable
00:25:01.854 21:30:16 -- common/autotest_common.sh@10 -- # set +x
00:25:01.854 21:30:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:25:01.854 21:30:16 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:01.854 21:30:16 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:01.854 21:30:16 -- common/autotest_common.sh@549 -- # xtrace_disable
00:25:01.854 21:30:16 -- common/autotest_common.sh@10 -- # set +x
00:25:01.854 21:30:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:25:01.854 21:30:16 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:25:01.854 21:30:16 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 1
00:25:01.854 21:30:16 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:25:01.854 21:30:16 -- host/auth.sh@44 -- # digest=sha384
00:25:01.854 21:30:16 -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:25:01.854 21:30:16 -- host/auth.sh@44 -- # keyid=1
00:25:01.854 21:30:16 -- host/auth.sh@45 -- # key=DHHC-1:00:ODg0OTgyN2JiYTA4ZmZhN2FiNGUzNzdkNDU2MGNjYTcwYWJhNzdhN2M0ZTM4ODA1gO84og==:
00:25:01.854 21:30:16 -- host/auth.sh@47 -- # echo 'hmac(sha384)'
00:25:01.854 21:30:16 -- host/auth.sh@48 -- # echo ffdhe4096
00:25:01.854 21:30:16 -- host/auth.sh@49 -- # echo DHHC-1:00:ODg0OTgyN2JiYTA4ZmZhN2FiNGUzNzdkNDU2MGNjYTcwYWJhNzdhN2M0ZTM4ODA1gO84og==:
00:25:01.854 21:30:16 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 1
00:25:01.854 21:30:16 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:25:01.854 21:30:16 -- host/auth.sh@68 -- # digest=sha384
00:25:01.854 21:30:16 -- host/auth.sh@68 -- # dhgroup=ffdhe4096
00:25:01.854 21:30:16 -- host/auth.sh@68 -- # keyid=1
00:25:01.854 21:30:16 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:25:01.854 21:30:16 -- common/autotest_common.sh@549 -- # xtrace_disable
00:25:01.854 21:30:16 -- common/autotest_common.sh@10 -- # set +x
00:25:01.854 21:30:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:25:01.854 21:30:16 -- host/auth.sh@70 -- # get_main_ns_ip
00:25:01.854 21:30:16 -- nvmf/common.sh@717 -- # local ip
00:25:01.854 21:30:16 -- nvmf/common.sh@718 -- # ip_candidates=()
00:25:01.854 21:30:16 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:25:01.854 21:30:16 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:01.854 21:30:16 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:01.854 21:30:16 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:25:01.854 21:30:16 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:01.854 21:30:16 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:25:01.854 21:30:16 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:25:01.854 21:30:16 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:25:01.854 21:30:16 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1
00:25:01.854 21:30:16 -- common/autotest_common.sh@549 -- # xtrace_disable
00:25:01.854 21:30:16 -- common/autotest_common.sh@10 -- # set +x
00:25:02.115 nvme0n1
00:25:02.115 21:30:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:25:02.115 21:30:16 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:25:02.115 21:30:16 -- common/autotest_common.sh@549 -- # xtrace_disable
00:25:02.115 21:30:16 -- host/auth.sh@73 -- # jq -r '.[].name'
00:25:02.115 21:30:16 -- common/autotest_common.sh@10 -- # set +x
00:25:02.115 21:30:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:25:02.115 21:30:17 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:02.115 21:30:17 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:02.115 21:30:17 -- common/autotest_common.sh@549 -- # xtrace_disable
00:25:02.115 21:30:17 -- common/autotest_common.sh@10 -- # set +x
00:25:02.115 21:30:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:25:02.115 21:30:17 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:25:02.115 21:30:17 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 2
00:25:02.115 21:30:17 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:25:02.115 21:30:17 -- host/auth.sh@44 -- # digest=sha384
00:25:02.115 21:30:17 -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:25:02.115 21:30:17 -- host/auth.sh@44 -- # keyid=2
00:25:02.115 21:30:17 -- host/auth.sh@45 -- # key=DHHC-1:01:NWUzM2Y2NDQ4NDc5MWYwMDViZjcwNWViNmQ2MjdmNzbCQTLC:
00:25:02.115 21:30:17 -- host/auth.sh@47 -- # echo 'hmac(sha384)'
00:25:02.115 21:30:17 -- host/auth.sh@48 -- # echo ffdhe4096
00:25:02.115 21:30:17 -- host/auth.sh@49 -- # echo DHHC-1:01:NWUzM2Y2NDQ4NDc5MWYwMDViZjcwNWViNmQ2MjdmNzbCQTLC:
00:25:02.115 21:30:17 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 2
00:25:02.115 21:30:17 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:25:02.115 21:30:17 -- host/auth.sh@68 -- # digest=sha384
00:25:02.115 21:30:17 -- host/auth.sh@68 -- # dhgroup=ffdhe4096
00:25:02.115 21:30:17 -- host/auth.sh@68 -- # keyid=2
00:25:02.115 21:30:17 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:25:02.115 21:30:17 -- common/autotest_common.sh@549 -- # xtrace_disable
00:25:02.115 21:30:17 -- common/autotest_common.sh@10 -- # set +x
00:25:02.115 21:30:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:25:02.115 21:30:17 -- host/auth.sh@70 -- # get_main_ns_ip
00:25:02.115 21:30:17 -- nvmf/common.sh@717 -- # local ip
00:25:02.115 21:30:17 -- nvmf/common.sh@718 -- # ip_candidates=()
00:25:02.115 21:30:17 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:25:02.116 21:30:17 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:02.116 21:30:17 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:02.116 21:30:17 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:25:02.116 21:30:17 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:02.116 21:30:17 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:25:02.116 21:30:17 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:25:02.116 21:30:17 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:25:02.116 21:30:17 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:25:02.116 21:30:17 -- common/autotest_common.sh@549 -- # xtrace_disable
00:25:02.116 21:30:17 -- common/autotest_common.sh@10 -- # set +x
00:25:02.376 nvme0n1
00:25:02.376 21:30:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:25:02.376 21:30:17 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:25:02.376 21:30:17 -- host/auth.sh@73 -- # jq -r '.[].name'
00:25:02.376 21:30:17 -- common/autotest_common.sh@549 -- # xtrace_disable
00:25:02.376 21:30:17 -- common/autotest_common.sh@10 -- # set +x
00:25:02.376 21:30:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:25:02.376 21:30:17 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:02.376 21:30:17 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:02.376 21:30:17 -- common/autotest_common.sh@549 -- # xtrace_disable
00:25:02.376 21:30:17 -- common/autotest_common.sh@10 -- # set +x
00:25:02.376 21:30:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:25:02.376 21:30:17 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:25:02.376 21:30:17 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 3
00:25:02.376 21:30:17 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:25:02.376 21:30:17 -- host/auth.sh@44 -- # digest=sha384
00:25:02.376 21:30:17 -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:25:02.376 21:30:17 -- host/auth.sh@44 -- # keyid=3
00:25:02.376 21:30:17 -- host/auth.sh@45 -- # key=DHHC-1:02:MTFjYmYzMDg0NTg2MzRhMzY0ODBlNzE1YWZkNmUzZWJmMzIyZWVkNjE0MDkzNWQz3FOfOQ==:
00:25:02.376 21:30:17 -- host/auth.sh@47 -- # echo 'hmac(sha384)'
00:25:02.376 21:30:17 -- host/auth.sh@48 -- # echo ffdhe4096
00:25:02.376 21:30:17 -- host/auth.sh@49 -- # echo DHHC-1:02:MTFjYmYzMDg0NTg2MzRhMzY0ODBlNzE1YWZkNmUzZWJmMzIyZWVkNjE0MDkzNWQz3FOfOQ==:
00:25:02.376 21:30:17 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 3
00:25:02.376 21:30:17 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:25:02.376 21:30:17 -- host/auth.sh@68 -- # digest=sha384
00:25:02.376 21:30:17 -- host/auth.sh@68 -- # dhgroup=ffdhe4096
00:25:02.376 21:30:17 -- host/auth.sh@68 -- # keyid=3
00:25:02.376 21:30:17 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:25:02.376 21:30:17 -- common/autotest_common.sh@549 -- # xtrace_disable
00:25:02.376 21:30:17 -- common/autotest_common.sh@10 -- # set +x
00:25:02.376 21:30:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:25:02.376 21:30:17 -- host/auth.sh@70 -- # get_main_ns_ip
00:25:02.376 21:30:17 -- nvmf/common.sh@717 -- # local ip
00:25:02.376 21:30:17 -- nvmf/common.sh@718 -- # ip_candidates=()
00:25:02.376 21:30:17 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:25:02.376 21:30:17 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:02.376 21:30:17 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:02.376 21:30:17 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:25:02.376 21:30:17 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:02.376 21:30:17 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:25:02.376 21:30:17 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:25:02.376 21:30:17 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:25:02.376 21:30:17 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3
00:25:02.376 21:30:17 -- common/autotest_common.sh@549 -- # xtrace_disable
00:25:02.376 21:30:17 -- common/autotest_common.sh@10 -- # set +x
00:25:02.637 nvme0n1
00:25:02.637 21:30:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:25:02.637 21:30:17 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:25:02.637 21:30:17 -- common/autotest_common.sh@549 -- # xtrace_disable
00:25:02.637 21:30:17 -- common/autotest_common.sh@10 -- # set +x
00:25:02.637 21:30:17 -- host/auth.sh@73 -- # jq -r '.[].name'
00:25:02.637 21:30:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:25:02.896 21:30:17 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:02.896 21:30:17 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:02.896 21:30:17 -- common/autotest_common.sh@549 -- # xtrace_disable
00:25:02.896 21:30:17 -- common/autotest_common.sh@10 -- # set +x
00:25:02.896 21:30:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:25:02.896 21:30:17 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:25:02.896 21:30:17 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 4
00:25:02.896 21:30:17 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:25:02.896 21:30:17 -- host/auth.sh@44 -- # digest=sha384
00:25:02.896 21:30:17 -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:25:02.896 21:30:17 -- host/auth.sh@44 -- # keyid=4
00:25:02.896 21:30:17 -- host/auth.sh@45 -- # key=DHHC-1:03:MzkyOTMyMzY1MTBiOWVjYjI0MDA3NzYwZTM2YTlkNTJmM2M2ZDI0NGMwZjc5M2Q4ZGE1Zjc5MWQ5ZWQwYjA0NAuu6f8=:
00:25:02.896 21:30:17 -- host/auth.sh@47 -- # echo 'hmac(sha384)'
00:25:02.896 21:30:17 -- host/auth.sh@48 -- # echo ffdhe4096
00:25:02.896 21:30:17 -- host/auth.sh@49 -- # echo DHHC-1:03:MzkyOTMyMzY1MTBiOWVjYjI0MDA3NzYwZTM2YTlkNTJmM2M2ZDI0NGMwZjc5M2Q4ZGE1Zjc5MWQ5ZWQwYjA0NAuu6f8=:
00:25:02.896 21:30:17 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 4
00:25:02.896 21:30:17 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:25:02.896 21:30:17 -- host/auth.sh@68 -- # digest=sha384
00:25:02.896 21:30:17 -- host/auth.sh@68 -- # dhgroup=ffdhe4096
00:25:02.896 21:30:17 -- host/auth.sh@68 -- # keyid=4
00:25:02.896 21:30:17 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:25:02.896 21:30:17 -- common/autotest_common.sh@549 -- # xtrace_disable
00:25:02.896 21:30:17 -- common/autotest_common.sh@10 -- # set +x
00:25:02.896 21:30:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:25:02.896 21:30:17 -- host/auth.sh@70 -- # get_main_ns_ip
00:25:02.896 21:30:17 -- nvmf/common.sh@717 -- # local ip
00:25:02.896 21:30:17 -- nvmf/common.sh@718 -- # ip_candidates=()
00:25:02.896 21:30:17 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:25:02.896 21:30:17 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:02.896 21:30:17 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:02.896 21:30:17 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:25:02.896 21:30:17 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:02.896 21:30:17 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:25:02.896 21:30:17 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:25:02.896 21:30:17 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:25:02.896 21:30:17 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:25:02.896 21:30:17 -- common/autotest_common.sh@549 -- # xtrace_disable
00:25:02.896 21:30:17 -- common/autotest_common.sh@10 -- # set +x
00:25:03.154 nvme0n1
00:25:03.154 21:30:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:25:03.154 21:30:17 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:25:03.154 21:30:17 -- host/auth.sh@73 -- # jq -r '.[].name'
00:25:03.154 21:30:17 -- common/autotest_common.sh@549 -- # xtrace_disable
00:25:03.154 21:30:17 -- common/autotest_common.sh@10 -- # set +x
00:25:03.154 21:30:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:25:03.154 21:30:17 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:03.154 21:30:17 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:03.154 21:30:17 -- common/autotest_common.sh@549 -- # xtrace_disable
00:25:03.154 21:30:17 -- common/autotest_common.sh@10 -- # set +x
00:25:03.154 21:30:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:25:03.154 21:30:17 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}"
00:25:03.154 21:30:17 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:25:03.154 21:30:17 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 0
00:25:03.154 21:30:17 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:25:03.154 21:30:17 -- host/auth.sh@44 -- # digest=sha384
00:25:03.154 21:30:17 -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:25:03.154 21:30:17 -- host/auth.sh@44 -- # keyid=0
00:25:03.154 21:30:17 -- host/auth.sh@45 -- # key=DHHC-1:00:ODNhYzkzZmNkYTc5ODVhZmMzN2RhMTdlNjJhMTQxMzARnuzC:
00:25:03.154 21:30:17 -- host/auth.sh@47 -- # echo 'hmac(sha384)'
00:25:03.154 21:30:17 -- host/auth.sh@48 -- # echo ffdhe6144
00:25:03.154 21:30:17 -- host/auth.sh@49 -- # echo DHHC-1:00:ODNhYzkzZmNkYTc5ODVhZmMzN2RhMTdlNjJhMTQxMzARnuzC:
00:25:03.154 21:30:17 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 0
00:25:03.154 21:30:17 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:25:03.154 21:30:17 -- host/auth.sh@68 -- # digest=sha384
00:25:03.154 21:30:17 -- host/auth.sh@68 -- # dhgroup=ffdhe6144
00:25:03.154 21:30:17 -- host/auth.sh@68 -- # keyid=0
00:25:03.154 21:30:17 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:25:03.154 21:30:17 -- common/autotest_common.sh@549 -- # xtrace_disable
00:25:03.154 21:30:17 -- common/autotest_common.sh@10 -- # set +x
00:25:03.154 21:30:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:25:03.154 21:30:17 -- host/auth.sh@70 -- # get_main_ns_ip
00:25:03.154 21:30:17 -- nvmf/common.sh@717 -- # local ip
00:25:03.154 21:30:17 -- nvmf/common.sh@718 -- # ip_candidates=()
00:25:03.154 21:30:17 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:25:03.154 21:30:17 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:03.154 21:30:17 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:03.154 21:30:17 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:25:03.154 21:30:17 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:03.154 21:30:17 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:25:03.154 21:30:17 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:25:03.154 21:30:17 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:25:03.154 21:30:17 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0
00:25:03.154 21:30:17 -- common/autotest_common.sh@549 -- # xtrace_disable
00:25:03.154 21:30:17 -- common/autotest_common.sh@10 -- # set +x
00:25:03.413 nvme0n1
00:25:03.413 21:30:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:25:03.413 21:30:18 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:25:03.413 21:30:18 -- host/auth.sh@73 -- # jq -r '.[].name'
00:25:03.413 21:30:18 -- common/autotest_common.sh@549 -- # xtrace_disable
00:25:03.413 21:30:18 -- common/autotest_common.sh@10 -- # set +x
00:25:03.413 21:30:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:25:03.413 21:30:18 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:03.413 21:30:18 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:03.413 21:30:18 -- common/autotest_common.sh@549 -- # xtrace_disable
00:25:03.413 21:30:18 -- common/autotest_common.sh@10 -- # set +x
00:25:03.413 21:30:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:25:03.413 21:30:18 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:25:03.413 21:30:18 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 1
00:25:03.413 21:30:18 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:25:03.413 21:30:18 -- host/auth.sh@44 -- # digest=sha384
00:25:03.413 21:30:18 -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:25:03.413 21:30:18 -- host/auth.sh@44 -- # keyid=1
00:25:03.413 21:30:18 -- host/auth.sh@45 -- # key=DHHC-1:00:ODg0OTgyN2JiYTA4ZmZhN2FiNGUzNzdkNDU2MGNjYTcwYWJhNzdhN2M0ZTM4ODA1gO84og==:
00:25:03.413 21:30:18 -- host/auth.sh@47 -- # echo 'hmac(sha384)'
00:25:03.413 21:30:18 -- host/auth.sh@48 -- # echo ffdhe6144
00:25:03.413 21:30:18 -- host/auth.sh@49 -- # echo DHHC-1:00:ODg0OTgyN2JiYTA4ZmZhN2FiNGUzNzdkNDU2MGNjYTcwYWJhNzdhN2M0ZTM4ODA1gO84og==:
00:25:03.413 21:30:18 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 1
00:25:03.413 21:30:18 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:25:03.413 21:30:18 -- host/auth.sh@68 -- # digest=sha384
00:25:03.413 21:30:18 -- host/auth.sh@68 -- # dhgroup=ffdhe6144
00:25:03.413 21:30:18 -- host/auth.sh@68 -- # keyid=1
00:25:03.413 21:30:18 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:25:03.413 21:30:18 -- common/autotest_common.sh@549 -- # xtrace_disable
00:25:03.413 21:30:18 -- common/autotest_common.sh@10 -- # set +x
00:25:03.413 21:30:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:25:03.413 21:30:18 -- host/auth.sh@70 -- # get_main_ns_ip
00:25:03.413 21:30:18 -- nvmf/common.sh@717 -- # local ip
00:25:03.413 21:30:18 -- nvmf/common.sh@718 -- # ip_candidates=()
00:25:03.413 21:30:18 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:25:03.413 21:30:18 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:03.413 21:30:18 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:03.413 21:30:18 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:25:03.413 21:30:18 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:03.413 21:30:18 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:25:03.413 21:30:18 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:25:03.413 21:30:18 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:25:03.413 21:30:18 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1
00:25:03.413 21:30:18 -- common/autotest_common.sh@549 -- # xtrace_disable
00:25:03.413 21:30:18 -- common/autotest_common.sh@10 -- # set +x
00:25:03.980 nvme0n1
00:25:03.980 21:30:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:25:03.980 21:30:18 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:25:03.980 21:30:18 -- common/autotest_common.sh@549 -- # xtrace_disable
00:25:03.980 21:30:18 -- common/autotest_common.sh@10 -- # set +x
00:25:03.980 21:30:18 -- host/auth.sh@73 -- # jq -r '.[].name'
00:25:03.980 21:30:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:25:03.980 21:30:18 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:03.980 21:30:18 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:03.980 21:30:18 -- common/autotest_common.sh@549 -- # xtrace_disable
00:25:03.980 21:30:18 -- common/autotest_common.sh@10 -- # set +x
00:25:03.980 21:30:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:25:03.980 21:30:18 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
"${!keys[@]}" 00:25:03.980 21:30:18 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:25:03.980 21:30:18 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:03.980 21:30:18 -- host/auth.sh@44 -- # digest=sha384 00:25:03.980 21:30:18 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:03.980 21:30:18 -- host/auth.sh@44 -- # keyid=2 00:25:03.980 21:30:18 -- host/auth.sh@45 -- # key=DHHC-1:01:NWUzM2Y2NDQ4NDc5MWYwMDViZjcwNWViNmQ2MjdmNzbCQTLC: 00:25:03.980 21:30:18 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:25:03.980 21:30:18 -- host/auth.sh@48 -- # echo ffdhe6144 00:25:03.980 21:30:18 -- host/auth.sh@49 -- # echo DHHC-1:01:NWUzM2Y2NDQ4NDc5MWYwMDViZjcwNWViNmQ2MjdmNzbCQTLC: 00:25:03.980 21:30:18 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 2 00:25:03.980 21:30:18 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:03.980 21:30:18 -- host/auth.sh@68 -- # digest=sha384 00:25:03.980 21:30:18 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:25:03.980 21:30:18 -- host/auth.sh@68 -- # keyid=2 00:25:03.980 21:30:18 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:03.980 21:30:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:03.980 21:30:18 -- common/autotest_common.sh@10 -- # set +x 00:25:03.980 21:30:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:03.980 21:30:18 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:03.980 21:30:18 -- nvmf/common.sh@717 -- # local ip 00:25:03.980 21:30:18 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:03.980 21:30:18 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:03.980 21:30:18 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:03.981 21:30:18 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:03.981 21:30:18 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:03.981 21:30:18 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:03.981 21:30:18 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:03.981 21:30:18 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:03.981 21:30:18 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:03.981 21:30:18 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:03.981 21:30:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:03.981 21:30:18 -- common/autotest_common.sh@10 -- # set +x 00:25:04.241 nvme0n1 00:25:04.241 21:30:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:04.241 21:30:19 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:04.241 21:30:19 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:04.241 21:30:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:04.241 21:30:19 -- common/autotest_common.sh@10 -- # set +x 00:25:04.241 21:30:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:04.241 21:30:19 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:04.241 21:30:19 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:04.241 21:30:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:04.241 21:30:19 -- common/autotest_common.sh@10 -- # set +x 00:25:04.241 21:30:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:04.241 21:30:19 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:04.241 21:30:19 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:25:04.241 21:30:19 -- host/auth.sh@42 -- # local digest dhgroup 
00:25:04.241 21:30:19 -- host/auth.sh@44 -- # digest=sha384
00:25:04.241 21:30:19 -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:25:04.241 21:30:19 -- host/auth.sh@44 -- # keyid=3
00:25:04.241 21:30:19 -- host/auth.sh@45 -- # key=DHHC-1:02:MTFjYmYzMDg0NTg2MzRhMzY0ODBlNzE1YWZkNmUzZWJmMzIyZWVkNjE0MDkzNWQz3FOfOQ==:
00:25:04.241 21:30:19 -- host/auth.sh@47 -- # echo 'hmac(sha384)'
00:25:04.241 21:30:19 -- host/auth.sh@48 -- # echo ffdhe6144
00:25:04.241 21:30:19 -- host/auth.sh@49 -- # echo DHHC-1:02:MTFjYmYzMDg0NTg2MzRhMzY0ODBlNzE1YWZkNmUzZWJmMzIyZWVkNjE0MDkzNWQz3FOfOQ==:
00:25:04.241 21:30:19 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 3
00:25:04.241 21:30:19 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:25:04.241 21:30:19 -- host/auth.sh@68 -- # digest=sha384
00:25:04.241 21:30:19 -- host/auth.sh@68 -- # dhgroup=ffdhe6144
00:25:04.242 21:30:19 -- host/auth.sh@68 -- # keyid=3
00:25:04.242 21:30:19 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:25:04.242 21:30:19 -- common/autotest_common.sh@549 -- # xtrace_disable
00:25:04.242 21:30:19 -- common/autotest_common.sh@10 -- # set +x
00:25:04.242 21:30:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:25:04.242 21:30:19 -- host/auth.sh@70 -- # get_main_ns_ip
00:25:04.242 21:30:19 -- nvmf/common.sh@717 -- # local ip
00:25:04.242 21:30:19 -- nvmf/common.sh@718 -- # ip_candidates=()
00:25:04.242 21:30:19 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:25:04.242 21:30:19 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:04.242 21:30:19 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:04.242 21:30:19 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:25:04.242 21:30:19 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:04.242 21:30:19 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:25:04.242 21:30:19 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:25:04.242 21:30:19 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:25:04.242 21:30:19 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3
00:25:04.242 21:30:19 -- common/autotest_common.sh@549 -- # xtrace_disable
00:25:04.242 21:30:19 -- common/autotest_common.sh@10 -- # set +x
00:25:04.812 nvme0n1
00:25:04.812 21:30:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:25:04.812 21:30:19 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:25:04.812 21:30:19 -- common/autotest_common.sh@549 -- # xtrace_disable
00:25:04.812 21:30:19 -- host/auth.sh@73 -- # jq -r '.[].name'
00:25:04.812 21:30:19 -- common/autotest_common.sh@10 -- # set +x
00:25:04.812 21:30:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:25:04.812 21:30:19 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:04.812 21:30:19 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:04.812 21:30:19 -- common/autotest_common.sh@549 -- # xtrace_disable
00:25:04.812 21:30:19 -- common/autotest_common.sh@10 -- # set +x
00:25:04.812 21:30:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:25:04.812 21:30:19 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:25:04.812 21:30:19 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 4
00:25:04.812 21:30:19 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:25:04.812 21:30:19 -- host/auth.sh@44 -- # digest=sha384
00:25:04.812 21:30:19 -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:25:04.812 21:30:19 -- host/auth.sh@44 -- # keyid=4
00:25:04.812 21:30:19 -- host/auth.sh@45 -- # key=DHHC-1:03:MzkyOTMyMzY1MTBiOWVjYjI0MDA3NzYwZTM2YTlkNTJmM2M2ZDI0NGMwZjc5M2Q4ZGE1Zjc5MWQ5ZWQwYjA0NAuu6f8=:
00:25:04.812 21:30:19 -- host/auth.sh@47 -- # echo 'hmac(sha384)'
00:25:04.812 21:30:19 -- host/auth.sh@48 -- # echo ffdhe6144
00:25:04.812 21:30:19 -- host/auth.sh@49 -- # echo DHHC-1:03:MzkyOTMyMzY1MTBiOWVjYjI0MDA3NzYwZTM2YTlkNTJmM2M2ZDI0NGMwZjc5M2Q4ZGE1Zjc5MWQ5ZWQwYjA0NAuu6f8=:
00:25:04.812 21:30:19 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 4
00:25:04.812 21:30:19 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:25:04.812 21:30:19 -- host/auth.sh@68 -- # digest=sha384
00:25:04.812 21:30:19 -- host/auth.sh@68 -- # dhgroup=ffdhe6144
00:25:04.812 21:30:19 -- host/auth.sh@68 -- # keyid=4
00:25:04.812 21:30:19 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:25:04.812 21:30:19 -- common/autotest_common.sh@549 -- # xtrace_disable
00:25:04.812 21:30:19 -- common/autotest_common.sh@10 -- # set +x
00:25:04.812 21:30:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:25:04.812 21:30:19 -- host/auth.sh@70 -- # get_main_ns_ip
00:25:04.812 21:30:19 -- nvmf/common.sh@717 -- # local ip
00:25:04.812 21:30:19 -- nvmf/common.sh@718 -- # ip_candidates=()
00:25:04.812 21:30:19 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:25:04.812 21:30:19 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:04.812 21:30:19 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:04.812 21:30:19 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:25:04.812 21:30:19 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:04.812 21:30:19 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:25:04.812 21:30:19 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:25:04.812 21:30:19 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:25:04.812 21:30:19 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:25:04.812 21:30:19 -- common/autotest_common.sh@549 -- # xtrace_disable
00:25:04.812 21:30:19 -- common/autotest_common.sh@10 -- # set +x
00:25:05.071 nvme0n1
00:25:05.071 21:30:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:25:05.071 21:30:20 -- host/auth.sh@73 -- # jq -r '.[].name'
00:25:05.071 21:30:20 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:25:05.071 21:30:20 -- common/autotest_common.sh@549 -- # xtrace_disable
00:25:05.071 21:30:20 -- common/autotest_common.sh@10 -- # set +x
00:25:05.071 21:30:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:25:05.329 21:30:20 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:05.329 21:30:20 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:05.329 21:30:20 -- common/autotest_common.sh@549 -- # xtrace_disable
00:25:05.329 21:30:20 -- common/autotest_common.sh@10 -- # set +x
00:25:05.329 21:30:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:25:05.329 21:30:20 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}"
00:25:05.329 21:30:20 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:25:05.329 21:30:20 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 0
00:25:05.329 21:30:20 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:25:05.329 21:30:20 -- host/auth.sh@44 -- # digest=sha384
00:25:05.329 21:30:20 -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:25:05.329 21:30:20 -- host/auth.sh@44 -- # keyid=0
00:25:05.329 21:30:20 -- host/auth.sh@45 -- # key=DHHC-1:00:ODNhYzkzZmNkYTc5ODVhZmMzN2RhMTdlNjJhMTQxMzARnuzC:
00:25:05.329 21:30:20 -- host/auth.sh@47 -- # echo 'hmac(sha384)'
00:25:05.329 21:30:20 -- host/auth.sh@48 -- # echo ffdhe8192
00:25:05.329 21:30:20 -- host/auth.sh@49 -- # echo DHHC-1:00:ODNhYzkzZmNkYTc5ODVhZmMzN2RhMTdlNjJhMTQxMzARnuzC:
00:25:05.329 21:30:20 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 0
00:25:05.329 21:30:20 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:25:05.329 21:30:20 -- host/auth.sh@68 -- # digest=sha384
00:25:05.330 21:30:20 -- host/auth.sh@68 -- # dhgroup=ffdhe8192
00:25:05.330 21:30:20 -- host/auth.sh@68 -- # keyid=0
00:25:05.330 21:30:20 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:25:05.330 21:30:20 -- common/autotest_common.sh@549 -- # xtrace_disable
00:25:05.330 21:30:20 -- common/autotest_common.sh@10 -- # set +x
00:25:05.330 21:30:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:25:05.330 21:30:20 -- host/auth.sh@70 -- # get_main_ns_ip
00:25:05.330 21:30:20 -- nvmf/common.sh@717 -- # local ip
00:25:05.330 21:30:20 -- nvmf/common.sh@718 -- # ip_candidates=()
00:25:05.330 21:30:20 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:25:05.330 21:30:20 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:05.330 21:30:20 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:05.330 21:30:20 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:25:05.330 21:30:20 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:05.330 21:30:20 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:25:05.330 21:30:20 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:25:05.330 21:30:20 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:25:05.330 21:30:20 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0
00:25:05.330 21:30:20 -- common/autotest_common.sh@549 -- # xtrace_disable
00:25:05.330 21:30:20 -- common/autotest_common.sh@10 -- # set +x
00:25:05.897 nvme0n1
00:25:05.897 21:30:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:25:05.897 21:30:20 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:25:05.897 21:30:20 -- host/auth.sh@73 -- # jq -r '.[].name'
00:25:05.897 21:30:20 -- common/autotest_common.sh@549 -- # xtrace_disable
00:25:05.897 21:30:20 -- common/autotest_common.sh@10 -- # set +x
00:25:05.897 21:30:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:25:05.897 21:30:20 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:05.897 21:30:20 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:05.897 21:30:20 -- common/autotest_common.sh@549 -- # xtrace_disable
00:25:05.897 21:30:20 -- common/autotest_common.sh@10 -- # set +x
00:25:05.897 21:30:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:25:05.897 21:30:20 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:25:05.897 21:30:20 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 1
00:25:05.897 21:30:20 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:25:05.897 21:30:20 -- host/auth.sh@44 -- # digest=sha384
00:25:05.897 21:30:20 -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:25:05.897 21:30:20 -- host/auth.sh@44 -- # keyid=1
00:25:05.897 21:30:20 -- host/auth.sh@45 -- # key=DHHC-1:00:ODg0OTgyN2JiYTA4ZmZhN2FiNGUzNzdkNDU2MGNjYTcwYWJhNzdhN2M0ZTM4ODA1gO84og==:
00:25:05.897 21:30:20 -- host/auth.sh@47 -- # echo 'hmac(sha384)'
00:25:05.897 21:30:20 -- host/auth.sh@48 -- # echo ffdhe8192
00:25:05.897 21:30:20 -- host/auth.sh@49 -- # echo DHHC-1:00:ODg0OTgyN2JiYTA4ZmZhN2FiNGUzNzdkNDU2MGNjYTcwYWJhNzdhN2M0ZTM4ODA1gO84og==:
00:25:05.897 21:30:20 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 1
00:25:05.897 21:30:20 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:25:05.897 21:30:20 -- host/auth.sh@68 -- # digest=sha384
00:25:05.897 21:30:20 -- host/auth.sh@68 -- # dhgroup=ffdhe8192
00:25:05.897 21:30:20 -- host/auth.sh@68 -- # keyid=1
00:25:05.897 21:30:20 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:25:05.897 21:30:20 -- common/autotest_common.sh@549 -- # xtrace_disable
00:25:05.897 21:30:20 -- common/autotest_common.sh@10 -- # set +x
00:25:05.897 21:30:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:25:05.897 21:30:20 -- host/auth.sh@70 -- # get_main_ns_ip
00:25:05.897 21:30:20 -- nvmf/common.sh@717 -- # local ip
00:25:05.897 21:30:20 -- nvmf/common.sh@718 -- # ip_candidates=()
00:25:05.897 21:30:20 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:25:05.897 21:30:20 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:05.897 21:30:20 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:05.897 21:30:20 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:25:05.897 21:30:20 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:05.897 21:30:20 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:25:05.897 21:30:20 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:25:05.897 21:30:20 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:25:05.897 21:30:20 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1
00:25:05.897 21:30:20 -- common/autotest_common.sh@549 -- # xtrace_disable
00:25:05.897 21:30:20 -- common/autotest_common.sh@10 -- # set +x
00:25:06.467 nvme0n1
00:25:06.467 21:30:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:25:06.467 21:30:21 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:25:06.467 21:30:21 -- common/autotest_common.sh@549 -- # xtrace_disable
00:25:06.467 21:30:21 -- common/autotest_common.sh@10 -- # set +x
00:25:06.467 21:30:21 -- host/auth.sh@73 -- # jq -r '.[].name'
00:25:06.467 21:30:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:25:06.467 21:30:21 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:06.467 21:30:21 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:06.467 21:30:21 -- common/autotest_common.sh@549 -- # xtrace_disable
00:25:06.467 21:30:21 -- common/autotest_common.sh@10 -- # set +x
00:25:06.467 21:30:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:25:06.467 21:30:21 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:25:06.467 21:30:21 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 2
00:25:06.467 21:30:21 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:25:06.467 21:30:21 -- host/auth.sh@44 -- # digest=sha384
00:25:06.467 21:30:21 -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:25:06.467 21:30:21 -- host/auth.sh@44 -- # keyid=2
00:25:06.467 21:30:21 -- host/auth.sh@45 -- # key=DHHC-1:01:NWUzM2Y2NDQ4NDc5MWYwMDViZjcwNWViNmQ2MjdmNzbCQTLC:
00:25:06.467 21:30:21 -- host/auth.sh@47 -- # echo 'hmac(sha384)'
'hmac(sha384)' 00:25:06.467 21:30:21 -- host/auth.sh@48 -- # echo ffdhe8192 00:25:06.467 21:30:21 -- host/auth.sh@49 -- # echo DHHC-1:01:NWUzM2Y2NDQ4NDc5MWYwMDViZjcwNWViNmQ2MjdmNzbCQTLC: 00:25:06.467 21:30:21 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 2 00:25:06.467 21:30:21 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:06.467 21:30:21 -- host/auth.sh@68 -- # digest=sha384 00:25:06.467 21:30:21 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:25:06.467 21:30:21 -- host/auth.sh@68 -- # keyid=2 00:25:06.467 21:30:21 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:06.467 21:30:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:06.467 21:30:21 -- common/autotest_common.sh@10 -- # set +x 00:25:06.467 21:30:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:06.467 21:30:21 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:06.467 21:30:21 -- nvmf/common.sh@717 -- # local ip 00:25:06.467 21:30:21 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:06.467 21:30:21 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:06.467 21:30:21 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:06.467 21:30:21 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:06.467 21:30:21 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:06.467 21:30:21 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:06.467 21:30:21 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:06.467 21:30:21 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:06.467 21:30:21 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:06.467 21:30:21 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:06.467 21:30:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:06.467 21:30:21 -- common/autotest_common.sh@10 -- # set +x 00:25:07.034 nvme0n1 00:25:07.034 21:30:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:07.034 21:30:21 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:07.034 21:30:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:07.034 21:30:21 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:07.034 21:30:21 -- common/autotest_common.sh@10 -- # set +x 00:25:07.294 21:30:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:07.294 21:30:22 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:07.294 21:30:22 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:07.294 21:30:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:07.294 21:30:22 -- common/autotest_common.sh@10 -- # set +x 00:25:07.294 21:30:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:07.294 21:30:22 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:07.294 21:30:22 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:25:07.294 21:30:22 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:07.294 21:30:22 -- host/auth.sh@44 -- # digest=sha384 00:25:07.294 21:30:22 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:07.294 21:30:22 -- host/auth.sh@44 -- # keyid=3 00:25:07.294 21:30:22 -- host/auth.sh@45 -- # key=DHHC-1:02:MTFjYmYzMDg0NTg2MzRhMzY0ODBlNzE1YWZkNmUzZWJmMzIyZWVkNjE0MDkzNWQz3FOfOQ==: 00:25:07.294 21:30:22 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:25:07.294 21:30:22 -- host/auth.sh@48 -- # echo ffdhe8192 00:25:07.294 21:30:22 -- host/auth.sh@49 -- # echo 
DHHC-1:02:MTFjYmYzMDg0NTg2MzRhMzY0ODBlNzE1YWZkNmUzZWJmMzIyZWVkNjE0MDkzNWQz3FOfOQ==: 00:25:07.294 21:30:22 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 3 00:25:07.294 21:30:22 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:07.294 21:30:22 -- host/auth.sh@68 -- # digest=sha384 00:25:07.294 21:30:22 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:25:07.294 21:30:22 -- host/auth.sh@68 -- # keyid=3 00:25:07.294 21:30:22 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:07.294 21:30:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:07.294 21:30:22 -- common/autotest_common.sh@10 -- # set +x 00:25:07.294 21:30:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:07.295 21:30:22 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:07.295 21:30:22 -- nvmf/common.sh@717 -- # local ip 00:25:07.295 21:30:22 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:07.295 21:30:22 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:07.295 21:30:22 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:07.295 21:30:22 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:07.295 21:30:22 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:07.295 21:30:22 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:07.295 21:30:22 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:07.295 21:30:22 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:07.295 21:30:22 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:07.295 21:30:22 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:25:07.295 21:30:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:07.295 21:30:22 -- common/autotest_common.sh@10 -- # set +x 00:25:07.866 nvme0n1 00:25:07.866 21:30:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:07.866 21:30:22 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:07.866 21:30:22 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:07.866 21:30:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:07.866 21:30:22 -- common/autotest_common.sh@10 -- # set +x 00:25:07.866 21:30:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:07.866 21:30:22 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:07.866 21:30:22 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:07.866 21:30:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:07.866 21:30:22 -- common/autotest_common.sh@10 -- # set +x 00:25:07.866 21:30:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:07.866 21:30:22 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:07.866 21:30:22 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:25:07.866 21:30:22 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:07.866 21:30:22 -- host/auth.sh@44 -- # digest=sha384 00:25:07.866 21:30:22 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:07.866 21:30:22 -- host/auth.sh@44 -- # keyid=4 00:25:07.866 21:30:22 -- host/auth.sh@45 -- # key=DHHC-1:03:MzkyOTMyMzY1MTBiOWVjYjI0MDA3NzYwZTM2YTlkNTJmM2M2ZDI0NGMwZjc5M2Q4ZGE1Zjc5MWQ5ZWQwYjA0NAuu6f8=: 00:25:07.866 21:30:22 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:25:07.866 21:30:22 -- host/auth.sh@48 -- # echo ffdhe8192 00:25:07.866 21:30:22 -- host/auth.sh@49 -- # echo 
DHHC-1:03:MzkyOTMyMzY1MTBiOWVjYjI0MDA3NzYwZTM2YTlkNTJmM2M2ZDI0NGMwZjc5M2Q4ZGE1Zjc5MWQ5ZWQwYjA0NAuu6f8=: 00:25:07.866 21:30:22 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 4 00:25:07.866 21:30:22 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:07.866 21:30:22 -- host/auth.sh@68 -- # digest=sha384 00:25:07.866 21:30:22 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:25:07.866 21:30:22 -- host/auth.sh@68 -- # keyid=4 00:25:07.866 21:30:22 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:07.866 21:30:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:07.866 21:30:22 -- common/autotest_common.sh@10 -- # set +x 00:25:07.866 21:30:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:07.866 21:30:22 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:07.866 21:30:22 -- nvmf/common.sh@717 -- # local ip 00:25:07.866 21:30:22 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:07.866 21:30:22 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:07.866 21:30:22 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:07.866 21:30:22 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:07.866 21:30:22 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:07.866 21:30:22 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:07.866 21:30:22 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:07.866 21:30:22 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:07.866 21:30:22 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:07.866 21:30:22 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:07.866 21:30:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:07.866 21:30:22 -- common/autotest_common.sh@10 -- # set +x 00:25:08.436 nvme0n1 00:25:08.436 21:30:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:08.436 21:30:23 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:08.436 21:30:23 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:08.436 21:30:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:08.436 21:30:23 -- common/autotest_common.sh@10 -- # set +x 00:25:08.436 21:30:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:08.436 21:30:23 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:08.436 21:30:23 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:08.436 21:30:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:08.436 21:30:23 -- common/autotest_common.sh@10 -- # set +x 00:25:08.436 21:30:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:08.436 21:30:23 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:25:08.436 21:30:23 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:25:08.436 21:30:23 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:08.436 21:30:23 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:25:08.436 21:30:23 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:08.436 21:30:23 -- host/auth.sh@44 -- # digest=sha512 00:25:08.436 21:30:23 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:08.436 21:30:23 -- host/auth.sh@44 -- # keyid=0 00:25:08.436 21:30:23 -- host/auth.sh@45 -- # key=DHHC-1:00:ODNhYzkzZmNkYTc5ODVhZmMzN2RhMTdlNjJhMTQxMzARnuzC: 00:25:08.436 21:30:23 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:25:08.436 21:30:23 -- host/auth.sh@48 -- # echo ffdhe2048 00:25:08.436 
21:30:23 -- host/auth.sh@49 -- # echo DHHC-1:00:ODNhYzkzZmNkYTc5ODVhZmMzN2RhMTdlNjJhMTQxMzARnuzC: 00:25:08.436 21:30:23 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 0 00:25:08.436 21:30:23 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:08.436 21:30:23 -- host/auth.sh@68 -- # digest=sha512 00:25:08.437 21:30:23 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:25:08.437 21:30:23 -- host/auth.sh@68 -- # keyid=0 00:25:08.437 21:30:23 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:08.437 21:30:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:08.437 21:30:23 -- common/autotest_common.sh@10 -- # set +x 00:25:08.437 21:30:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:08.437 21:30:23 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:08.437 21:30:23 -- nvmf/common.sh@717 -- # local ip 00:25:08.437 21:30:23 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:08.437 21:30:23 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:08.437 21:30:23 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:08.437 21:30:23 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:08.437 21:30:23 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:08.437 21:30:23 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:08.437 21:30:23 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:08.437 21:30:23 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:08.437 21:30:23 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:08.437 21:30:23 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:25:08.437 21:30:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:08.437 21:30:23 -- common/autotest_common.sh@10 -- # set +x 00:25:08.697 nvme0n1 00:25:08.697 21:30:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:08.697 21:30:23 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:08.697 21:30:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:08.697 21:30:23 -- common/autotest_common.sh@10 -- # set +x 00:25:08.697 21:30:23 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:08.697 21:30:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:08.697 21:30:23 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:08.697 21:30:23 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:08.697 21:30:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:08.697 21:30:23 -- common/autotest_common.sh@10 -- # set +x 00:25:08.697 21:30:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:08.697 21:30:23 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:08.697 21:30:23 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:25:08.697 21:30:23 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:08.697 21:30:23 -- host/auth.sh@44 -- # digest=sha512 00:25:08.697 21:30:23 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:08.697 21:30:23 -- host/auth.sh@44 -- # keyid=1 00:25:08.697 21:30:23 -- host/auth.sh@45 -- # key=DHHC-1:00:ODg0OTgyN2JiYTA4ZmZhN2FiNGUzNzdkNDU2MGNjYTcwYWJhNzdhN2M0ZTM4ODA1gO84og==: 00:25:08.697 21:30:23 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:25:08.697 21:30:23 -- host/auth.sh@48 -- # echo ffdhe2048 00:25:08.697 21:30:23 -- host/auth.sh@49 -- # echo DHHC-1:00:ODg0OTgyN2JiYTA4ZmZhN2FiNGUzNzdkNDU2MGNjYTcwYWJhNzdhN2M0ZTM4ODA1gO84og==: 00:25:08.698 21:30:23 
-- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 1 00:25:08.698 21:30:23 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:08.698 21:30:23 -- host/auth.sh@68 -- # digest=sha512 00:25:08.698 21:30:23 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:25:08.698 21:30:23 -- host/auth.sh@68 -- # keyid=1 00:25:08.698 21:30:23 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:08.698 21:30:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:08.698 21:30:23 -- common/autotest_common.sh@10 -- # set +x 00:25:08.698 21:30:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:08.698 21:30:23 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:08.698 21:30:23 -- nvmf/common.sh@717 -- # local ip 00:25:08.698 21:30:23 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:08.698 21:30:23 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:08.698 21:30:23 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:08.698 21:30:23 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:08.698 21:30:23 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:08.698 21:30:23 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:08.698 21:30:23 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:08.698 21:30:23 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:08.698 21:30:23 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:08.698 21:30:23 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:25:08.698 21:30:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:08.698 21:30:23 -- common/autotest_common.sh@10 -- # set +x 00:25:08.958 nvme0n1 00:25:08.958 21:30:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:08.958 21:30:23 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:08.958 21:30:23 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:08.958 21:30:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:08.958 21:30:23 -- common/autotest_common.sh@10 -- # set +x 00:25:08.958 21:30:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:08.958 21:30:23 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:08.958 21:30:23 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:08.958 21:30:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:08.958 21:30:23 -- common/autotest_common.sh@10 -- # set +x 00:25:08.958 21:30:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:08.958 21:30:23 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:08.958 21:30:23 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:25:08.958 21:30:23 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:08.958 21:30:23 -- host/auth.sh@44 -- # digest=sha512 00:25:08.958 21:30:23 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:08.958 21:30:23 -- host/auth.sh@44 -- # keyid=2 00:25:08.958 21:30:23 -- host/auth.sh@45 -- # key=DHHC-1:01:NWUzM2Y2NDQ4NDc5MWYwMDViZjcwNWViNmQ2MjdmNzbCQTLC: 00:25:08.958 21:30:23 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:25:08.958 21:30:23 -- host/auth.sh@48 -- # echo ffdhe2048 00:25:08.958 21:30:23 -- host/auth.sh@49 -- # echo DHHC-1:01:NWUzM2Y2NDQ4NDc5MWYwMDViZjcwNWViNmQ2MjdmNzbCQTLC: 00:25:08.958 21:30:23 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 2 00:25:08.958 21:30:23 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:08.958 21:30:23 -- 
host/auth.sh@68 -- # digest=sha512 00:25:08.958 21:30:23 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:25:08.958 21:30:23 -- host/auth.sh@68 -- # keyid=2 00:25:08.958 21:30:23 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:08.958 21:30:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:08.958 21:30:23 -- common/autotest_common.sh@10 -- # set +x 00:25:08.958 21:30:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:08.958 21:30:23 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:08.958 21:30:23 -- nvmf/common.sh@717 -- # local ip 00:25:08.958 21:30:23 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:08.958 21:30:23 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:08.958 21:30:23 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:08.958 21:30:23 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:08.958 21:30:23 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:08.958 21:30:23 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:08.958 21:30:23 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:08.958 21:30:23 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:08.958 21:30:23 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:08.958 21:30:23 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:08.958 21:30:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:08.958 21:30:23 -- common/autotest_common.sh@10 -- # set +x 00:25:08.958 nvme0n1 00:25:08.958 21:30:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:08.958 21:30:23 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:08.958 21:30:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:08.958 21:30:23 -- common/autotest_common.sh@10 -- # set +x 00:25:08.958 21:30:23 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:08.958 21:30:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:08.958 21:30:23 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:08.958 21:30:23 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:08.958 21:30:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:08.958 21:30:23 -- common/autotest_common.sh@10 -- # set +x 00:25:09.219 21:30:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:09.219 21:30:23 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:09.219 21:30:23 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:25:09.219 21:30:23 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:09.219 21:30:23 -- host/auth.sh@44 -- # digest=sha512 00:25:09.219 21:30:23 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:09.219 21:30:23 -- host/auth.sh@44 -- # keyid=3 00:25:09.219 21:30:23 -- host/auth.sh@45 -- # key=DHHC-1:02:MTFjYmYzMDg0NTg2MzRhMzY0ODBlNzE1YWZkNmUzZWJmMzIyZWVkNjE0MDkzNWQz3FOfOQ==: 00:25:09.219 21:30:23 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:25:09.219 21:30:23 -- host/auth.sh@48 -- # echo ffdhe2048 00:25:09.219 21:30:23 -- host/auth.sh@49 -- # echo DHHC-1:02:MTFjYmYzMDg0NTg2MzRhMzY0ODBlNzE1YWZkNmUzZWJmMzIyZWVkNjE0MDkzNWQz3FOfOQ==: 00:25:09.219 21:30:23 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 3 00:25:09.219 21:30:23 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:09.219 21:30:23 -- host/auth.sh@68 -- # digest=sha512 00:25:09.219 21:30:23 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:25:09.219 21:30:23 
-- host/auth.sh@68 -- # keyid=3 00:25:09.219 21:30:23 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:09.219 21:30:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:09.219 21:30:23 -- common/autotest_common.sh@10 -- # set +x 00:25:09.219 21:30:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:09.219 21:30:23 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:09.219 21:30:23 -- nvmf/common.sh@717 -- # local ip 00:25:09.219 21:30:23 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:09.219 21:30:23 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:09.219 21:30:23 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:09.219 21:30:23 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:09.219 21:30:23 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:09.219 21:30:23 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:09.219 21:30:23 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:09.219 21:30:23 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:09.219 21:30:23 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:09.219 21:30:23 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:25:09.219 21:30:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:09.219 21:30:23 -- common/autotest_common.sh@10 -- # set +x 00:25:09.219 nvme0n1 00:25:09.219 21:30:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:09.219 21:30:24 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:09.219 21:30:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:09.219 21:30:24 -- common/autotest_common.sh@10 -- # set +x 00:25:09.219 21:30:24 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:09.219 21:30:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:09.219 21:30:24 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:09.219 21:30:24 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:09.219 21:30:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:09.219 21:30:24 -- common/autotest_common.sh@10 -- # set +x 00:25:09.219 21:30:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:09.219 21:30:24 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:09.219 21:30:24 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:25:09.219 21:30:24 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:09.219 21:30:24 -- host/auth.sh@44 -- # digest=sha512 00:25:09.219 21:30:24 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:09.219 21:30:24 -- host/auth.sh@44 -- # keyid=4 00:25:09.219 21:30:24 -- host/auth.sh@45 -- # key=DHHC-1:03:MzkyOTMyMzY1MTBiOWVjYjI0MDA3NzYwZTM2YTlkNTJmM2M2ZDI0NGMwZjc5M2Q4ZGE1Zjc5MWQ5ZWQwYjA0NAuu6f8=: 00:25:09.219 21:30:24 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:25:09.219 21:30:24 -- host/auth.sh@48 -- # echo ffdhe2048 00:25:09.220 21:30:24 -- host/auth.sh@49 -- # echo DHHC-1:03:MzkyOTMyMzY1MTBiOWVjYjI0MDA3NzYwZTM2YTlkNTJmM2M2ZDI0NGMwZjc5M2Q4ZGE1Zjc5MWQ5ZWQwYjA0NAuu6f8=: 00:25:09.220 21:30:24 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 4 00:25:09.220 21:30:24 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:09.220 21:30:24 -- host/auth.sh@68 -- # digest=sha512 00:25:09.220 21:30:24 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:25:09.220 21:30:24 -- host/auth.sh@68 -- # keyid=4 00:25:09.220 21:30:24 -- host/auth.sh@69 -- # 
rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:09.220 21:30:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:09.220 21:30:24 -- common/autotest_common.sh@10 -- # set +x 00:25:09.220 21:30:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:09.220 21:30:24 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:09.220 21:30:24 -- nvmf/common.sh@717 -- # local ip 00:25:09.220 21:30:24 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:09.220 21:30:24 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:09.220 21:30:24 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:09.220 21:30:24 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:09.220 21:30:24 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:09.220 21:30:24 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:09.220 21:30:24 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:09.220 21:30:24 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:09.220 21:30:24 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:09.220 21:30:24 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:09.220 21:30:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:09.220 21:30:24 -- common/autotest_common.sh@10 -- # set +x 00:25:09.481 nvme0n1 00:25:09.481 21:30:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:09.481 21:30:24 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:09.481 21:30:24 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:09.481 21:30:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:09.481 21:30:24 -- common/autotest_common.sh@10 -- # set +x 00:25:09.481 21:30:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:09.481 21:30:24 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:09.481 21:30:24 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:09.481 21:30:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:09.481 21:30:24 -- common/autotest_common.sh@10 -- # set +x 00:25:09.481 21:30:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:09.481 21:30:24 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:25:09.481 21:30:24 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:09.481 21:30:24 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:25:09.481 21:30:24 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:09.481 21:30:24 -- host/auth.sh@44 -- # digest=sha512 00:25:09.481 21:30:24 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:09.481 21:30:24 -- host/auth.sh@44 -- # keyid=0 00:25:09.481 21:30:24 -- host/auth.sh@45 -- # key=DHHC-1:00:ODNhYzkzZmNkYTc5ODVhZmMzN2RhMTdlNjJhMTQxMzARnuzC: 00:25:09.481 21:30:24 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:25:09.481 21:30:24 -- host/auth.sh@48 -- # echo ffdhe3072 00:25:09.481 21:30:24 -- host/auth.sh@49 -- # echo DHHC-1:00:ODNhYzkzZmNkYTc5ODVhZmMzN2RhMTdlNjJhMTQxMzARnuzC: 00:25:09.481 21:30:24 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 0 00:25:09.481 21:30:24 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:09.481 21:30:24 -- host/auth.sh@68 -- # digest=sha512 00:25:09.481 21:30:24 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:25:09.481 21:30:24 -- host/auth.sh@68 -- # keyid=0 00:25:09.481 21:30:24 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 
00:25:09.481 21:30:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:09.481 21:30:24 -- common/autotest_common.sh@10 -- # set +x 00:25:09.481 21:30:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:09.481 21:30:24 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:09.481 21:30:24 -- nvmf/common.sh@717 -- # local ip 00:25:09.481 21:30:24 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:09.481 21:30:24 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:09.481 21:30:24 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:09.481 21:30:24 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:09.481 21:30:24 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:09.481 21:30:24 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:09.481 21:30:24 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:09.481 21:30:24 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:09.481 21:30:24 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:09.481 21:30:24 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:25:09.481 21:30:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:09.481 21:30:24 -- common/autotest_common.sh@10 -- # set +x 00:25:09.743 nvme0n1 00:25:09.743 21:30:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:09.743 21:30:24 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:09.743 21:30:24 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:09.743 21:30:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:09.743 21:30:24 -- common/autotest_common.sh@10 -- # set +x 00:25:09.743 21:30:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:09.743 21:30:24 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:09.743 21:30:24 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:09.743 21:30:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:09.743 21:30:24 -- common/autotest_common.sh@10 -- # set +x 00:25:09.743 21:30:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:09.743 21:30:24 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:09.743 21:30:24 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:25:09.743 21:30:24 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:09.743 21:30:24 -- host/auth.sh@44 -- # digest=sha512 00:25:09.743 21:30:24 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:09.743 21:30:24 -- host/auth.sh@44 -- # keyid=1 00:25:09.743 21:30:24 -- host/auth.sh@45 -- # key=DHHC-1:00:ODg0OTgyN2JiYTA4ZmZhN2FiNGUzNzdkNDU2MGNjYTcwYWJhNzdhN2M0ZTM4ODA1gO84og==: 00:25:09.743 21:30:24 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:25:09.743 21:30:24 -- host/auth.sh@48 -- # echo ffdhe3072 00:25:09.743 21:30:24 -- host/auth.sh@49 -- # echo DHHC-1:00:ODg0OTgyN2JiYTA4ZmZhN2FiNGUzNzdkNDU2MGNjYTcwYWJhNzdhN2M0ZTM4ODA1gO84og==: 00:25:09.743 21:30:24 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 1 00:25:09.743 21:30:24 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:09.743 21:30:24 -- host/auth.sh@68 -- # digest=sha512 00:25:09.743 21:30:24 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:25:09.743 21:30:24 -- host/auth.sh@68 -- # keyid=1 00:25:09.743 21:30:24 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:09.743 21:30:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:09.743 21:30:24 -- 
common/autotest_common.sh@10 -- # set +x 00:25:09.743 21:30:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:09.743 21:30:24 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:09.743 21:30:24 -- nvmf/common.sh@717 -- # local ip 00:25:09.743 21:30:24 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:09.743 21:30:24 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:09.743 21:30:24 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:09.743 21:30:24 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:09.743 21:30:24 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:09.743 21:30:24 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:09.743 21:30:24 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:09.743 21:30:24 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:09.743 21:30:24 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:09.743 21:30:24 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:25:09.743 21:30:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:09.743 21:30:24 -- common/autotest_common.sh@10 -- # set +x 00:25:10.004 nvme0n1 00:25:10.004 21:30:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:10.004 21:30:24 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:10.004 21:30:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:10.004 21:30:24 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:10.004 21:30:24 -- common/autotest_common.sh@10 -- # set +x 00:25:10.004 21:30:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:10.004 21:30:24 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:10.004 21:30:24 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:10.004 21:30:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:10.004 21:30:24 -- common/autotest_common.sh@10 -- # set +x 00:25:10.004 21:30:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:10.004 21:30:24 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:10.004 21:30:24 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:25:10.004 21:30:24 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:10.004 21:30:24 -- host/auth.sh@44 -- # digest=sha512 00:25:10.004 21:30:24 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:10.004 21:30:24 -- host/auth.sh@44 -- # keyid=2 00:25:10.004 21:30:24 -- host/auth.sh@45 -- # key=DHHC-1:01:NWUzM2Y2NDQ4NDc5MWYwMDViZjcwNWViNmQ2MjdmNzbCQTLC: 00:25:10.004 21:30:24 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:25:10.004 21:30:24 -- host/auth.sh@48 -- # echo ffdhe3072 00:25:10.004 21:30:24 -- host/auth.sh@49 -- # echo DHHC-1:01:NWUzM2Y2NDQ4NDc5MWYwMDViZjcwNWViNmQ2MjdmNzbCQTLC: 00:25:10.004 21:30:24 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 2 00:25:10.004 21:30:24 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:10.004 21:30:24 -- host/auth.sh@68 -- # digest=sha512 00:25:10.004 21:30:24 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:25:10.004 21:30:24 -- host/auth.sh@68 -- # keyid=2 00:25:10.004 21:30:24 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:10.004 21:30:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:10.004 21:30:24 -- common/autotest_common.sh@10 -- # set +x 00:25:10.004 21:30:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:10.004 21:30:24 -- host/auth.sh@70 -- # 
get_main_ns_ip 00:25:10.004 21:30:24 -- nvmf/common.sh@717 -- # local ip 00:25:10.004 21:30:24 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:10.004 21:30:24 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:10.004 21:30:24 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:10.004 21:30:24 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:10.004 21:30:24 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:10.004 21:30:24 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:10.004 21:30:24 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:10.004 21:30:24 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:10.004 21:30:24 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:10.004 21:30:24 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:10.004 21:30:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:10.004 21:30:24 -- common/autotest_common.sh@10 -- # set +x 00:25:10.264 nvme0n1 00:25:10.264 21:30:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:10.264 21:30:25 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:10.264 21:30:25 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:10.264 21:30:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:10.264 21:30:25 -- common/autotest_common.sh@10 -- # set +x 00:25:10.264 21:30:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:10.264 21:30:25 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:10.264 21:30:25 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:10.264 21:30:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:10.264 21:30:25 -- common/autotest_common.sh@10 -- # set +x 00:25:10.264 21:30:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:10.264 21:30:25 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:10.264 21:30:25 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:25:10.264 21:30:25 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:10.264 21:30:25 -- host/auth.sh@44 -- # digest=sha512 00:25:10.264 21:30:25 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:10.264 21:30:25 -- host/auth.sh@44 -- # keyid=3 00:25:10.264 21:30:25 -- host/auth.sh@45 -- # key=DHHC-1:02:MTFjYmYzMDg0NTg2MzRhMzY0ODBlNzE1YWZkNmUzZWJmMzIyZWVkNjE0MDkzNWQz3FOfOQ==: 00:25:10.264 21:30:25 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:25:10.264 21:30:25 -- host/auth.sh@48 -- # echo ffdhe3072 00:25:10.264 21:30:25 -- host/auth.sh@49 -- # echo DHHC-1:02:MTFjYmYzMDg0NTg2MzRhMzY0ODBlNzE1YWZkNmUzZWJmMzIyZWVkNjE0MDkzNWQz3FOfOQ==: 00:25:10.264 21:30:25 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 3 00:25:10.264 21:30:25 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:10.264 21:30:25 -- host/auth.sh@68 -- # digest=sha512 00:25:10.264 21:30:25 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:25:10.264 21:30:25 -- host/auth.sh@68 -- # keyid=3 00:25:10.264 21:30:25 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:10.264 21:30:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:10.264 21:30:25 -- common/autotest_common.sh@10 -- # set +x 00:25:10.264 21:30:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:10.264 21:30:25 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:10.264 21:30:25 -- nvmf/common.sh@717 -- # local ip 00:25:10.264 21:30:25 -- nvmf/common.sh@718 -- 
# ip_candidates=() 00:25:10.264 21:30:25 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:10.264 21:30:25 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:10.264 21:30:25 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:10.264 21:30:25 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:10.264 21:30:25 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:10.264 21:30:25 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:10.264 21:30:25 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:10.264 21:30:25 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:10.264 21:30:25 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:25:10.264 21:30:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:10.264 21:30:25 -- common/autotest_common.sh@10 -- # set +x 00:25:10.524 nvme0n1 00:25:10.524 21:30:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:10.524 21:30:25 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:10.524 21:30:25 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:10.524 21:30:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:10.524 21:30:25 -- common/autotest_common.sh@10 -- # set +x 00:25:10.524 21:30:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:10.525 21:30:25 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:10.525 21:30:25 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:10.525 21:30:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:10.525 21:30:25 -- common/autotest_common.sh@10 -- # set +x 00:25:10.525 21:30:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:10.525 21:30:25 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:10.525 21:30:25 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:25:10.525 21:30:25 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:10.525 21:30:25 -- host/auth.sh@44 -- # digest=sha512 00:25:10.525 21:30:25 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:10.525 21:30:25 -- host/auth.sh@44 -- # keyid=4 00:25:10.525 21:30:25 -- host/auth.sh@45 -- # key=DHHC-1:03:MzkyOTMyMzY1MTBiOWVjYjI0MDA3NzYwZTM2YTlkNTJmM2M2ZDI0NGMwZjc5M2Q4ZGE1Zjc5MWQ5ZWQwYjA0NAuu6f8=: 00:25:10.525 21:30:25 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:25:10.525 21:30:25 -- host/auth.sh@48 -- # echo ffdhe3072 00:25:10.525 21:30:25 -- host/auth.sh@49 -- # echo DHHC-1:03:MzkyOTMyMzY1MTBiOWVjYjI0MDA3NzYwZTM2YTlkNTJmM2M2ZDI0NGMwZjc5M2Q4ZGE1Zjc5MWQ5ZWQwYjA0NAuu6f8=: 00:25:10.525 21:30:25 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 4 00:25:10.525 21:30:25 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:10.525 21:30:25 -- host/auth.sh@68 -- # digest=sha512 00:25:10.525 21:30:25 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:25:10.525 21:30:25 -- host/auth.sh@68 -- # keyid=4 00:25:10.525 21:30:25 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:10.525 21:30:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:10.525 21:30:25 -- common/autotest_common.sh@10 -- # set +x 00:25:10.525 21:30:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:10.525 21:30:25 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:10.525 21:30:25 -- nvmf/common.sh@717 -- # local ip 00:25:10.525 21:30:25 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:10.525 21:30:25 -- nvmf/common.sh@718 -- # local -A 
ip_candidates 00:25:10.525 21:30:25 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:10.525 21:30:25 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:10.525 21:30:25 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:10.525 21:30:25 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:10.525 21:30:25 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:10.525 21:30:25 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:10.525 21:30:25 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:10.525 21:30:25 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:10.525 21:30:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:10.525 21:30:25 -- common/autotest_common.sh@10 -- # set +x 00:25:10.785 nvme0n1 00:25:10.785 21:30:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:10.785 21:30:25 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:10.785 21:30:25 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:10.785 21:30:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:10.785 21:30:25 -- common/autotest_common.sh@10 -- # set +x 00:25:10.785 21:30:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:10.785 21:30:25 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:10.785 21:30:25 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:10.785 21:30:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:10.785 21:30:25 -- common/autotest_common.sh@10 -- # set +x 00:25:10.785 21:30:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:10.785 21:30:25 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:25:10.785 21:30:25 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:10.785 21:30:25 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:25:10.785 21:30:25 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:10.785 21:30:25 -- host/auth.sh@44 -- # digest=sha512 00:25:10.785 21:30:25 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:10.785 21:30:25 -- host/auth.sh@44 -- # keyid=0 00:25:10.785 21:30:25 -- host/auth.sh@45 -- # key=DHHC-1:00:ODNhYzkzZmNkYTc5ODVhZmMzN2RhMTdlNjJhMTQxMzARnuzC: 00:25:10.785 21:30:25 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:25:10.785 21:30:25 -- host/auth.sh@48 -- # echo ffdhe4096 00:25:10.785 21:30:25 -- host/auth.sh@49 -- # echo DHHC-1:00:ODNhYzkzZmNkYTc5ODVhZmMzN2RhMTdlNjJhMTQxMzARnuzC: 00:25:10.785 21:30:25 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 0 00:25:10.786 21:30:25 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:10.786 21:30:25 -- host/auth.sh@68 -- # digest=sha512 00:25:10.786 21:30:25 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:25:10.786 21:30:25 -- host/auth.sh@68 -- # keyid=0 00:25:10.786 21:30:25 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:10.786 21:30:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:10.786 21:30:25 -- common/autotest_common.sh@10 -- # set +x 00:25:10.786 21:30:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:10.786 21:30:25 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:10.786 21:30:25 -- nvmf/common.sh@717 -- # local ip 00:25:10.786 21:30:25 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:10.786 21:30:25 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:10.786 21:30:25 -- nvmf/common.sh@720 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:10.786 21:30:25 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:10.786 21:30:25 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:10.786 21:30:25 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:10.786 21:30:25 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:10.786 21:30:25 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:10.786 21:30:25 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:10.786 21:30:25 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:25:10.786 21:30:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:10.786 21:30:25 -- common/autotest_common.sh@10 -- # set +x 00:25:11.047 nvme0n1 00:25:11.047 21:30:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:11.047 21:30:25 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:11.047 21:30:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:11.047 21:30:25 -- common/autotest_common.sh@10 -- # set +x 00:25:11.047 21:30:25 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:11.047 21:30:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:11.047 21:30:25 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:11.047 21:30:25 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:11.047 21:30:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:11.047 21:30:25 -- common/autotest_common.sh@10 -- # set +x 00:25:11.047 21:30:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:11.047 21:30:25 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:11.047 21:30:25 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:25:11.047 21:30:25 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:11.047 21:30:25 -- host/auth.sh@44 -- # digest=sha512 00:25:11.047 21:30:25 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:11.047 21:30:25 -- host/auth.sh@44 -- # keyid=1 00:25:11.047 21:30:25 -- host/auth.sh@45 -- # key=DHHC-1:00:ODg0OTgyN2JiYTA4ZmZhN2FiNGUzNzdkNDU2MGNjYTcwYWJhNzdhN2M0ZTM4ODA1gO84og==: 00:25:11.047 21:30:25 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:25:11.047 21:30:25 -- host/auth.sh@48 -- # echo ffdhe4096 00:25:11.047 21:30:25 -- host/auth.sh@49 -- # echo DHHC-1:00:ODg0OTgyN2JiYTA4ZmZhN2FiNGUzNzdkNDU2MGNjYTcwYWJhNzdhN2M0ZTM4ODA1gO84og==: 00:25:11.047 21:30:25 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 1 00:25:11.047 21:30:25 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:11.047 21:30:25 -- host/auth.sh@68 -- # digest=sha512 00:25:11.047 21:30:25 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:25:11.047 21:30:25 -- host/auth.sh@68 -- # keyid=1 00:25:11.047 21:30:25 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:11.047 21:30:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:11.047 21:30:25 -- common/autotest_common.sh@10 -- # set +x 00:25:11.047 21:30:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:11.047 21:30:25 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:11.047 21:30:25 -- nvmf/common.sh@717 -- # local ip 00:25:11.047 21:30:25 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:11.047 21:30:25 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:11.047 21:30:25 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:11.047 21:30:25 -- nvmf/common.sh@721 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:11.047 21:30:25 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:11.047 21:30:25 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:11.047 21:30:25 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:11.047 21:30:25 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:11.047 21:30:25 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:11.047 21:30:25 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:25:11.047 21:30:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:11.047 21:30:25 -- common/autotest_common.sh@10 -- # set +x 00:25:11.308 nvme0n1 00:25:11.308 21:30:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:11.308 21:30:26 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:11.308 21:30:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:11.308 21:30:26 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:11.308 21:30:26 -- common/autotest_common.sh@10 -- # set +x 00:25:11.308 21:30:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:11.308 21:30:26 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:11.308 21:30:26 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:11.308 21:30:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:11.308 21:30:26 -- common/autotest_common.sh@10 -- # set +x 00:25:11.308 21:30:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:11.308 21:30:26 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:11.308 21:30:26 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:25:11.308 21:30:26 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:11.308 21:30:26 -- host/auth.sh@44 -- # digest=sha512 00:25:11.308 21:30:26 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:11.308 21:30:26 -- host/auth.sh@44 -- # keyid=2 00:25:11.308 21:30:26 -- host/auth.sh@45 -- # key=DHHC-1:01:NWUzM2Y2NDQ4NDc5MWYwMDViZjcwNWViNmQ2MjdmNzbCQTLC: 00:25:11.308 21:30:26 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:25:11.308 21:30:26 -- host/auth.sh@48 -- # echo ffdhe4096 00:25:11.308 21:30:26 -- host/auth.sh@49 -- # echo DHHC-1:01:NWUzM2Y2NDQ4NDc5MWYwMDViZjcwNWViNmQ2MjdmNzbCQTLC: 00:25:11.308 21:30:26 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 2 00:25:11.308 21:30:26 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:11.308 21:30:26 -- host/auth.sh@68 -- # digest=sha512 00:25:11.308 21:30:26 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:25:11.308 21:30:26 -- host/auth.sh@68 -- # keyid=2 00:25:11.308 21:30:26 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:11.308 21:30:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:11.308 21:30:26 -- common/autotest_common.sh@10 -- # set +x 00:25:11.308 21:30:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:11.308 21:30:26 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:11.308 21:30:26 -- nvmf/common.sh@717 -- # local ip 00:25:11.308 21:30:26 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:11.308 21:30:26 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:11.308 21:30:26 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:11.308 21:30:26 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:11.308 21:30:26 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:11.308 21:30:26 -- nvmf/common.sh@723 -- # [[ -z 
NVMF_INITIATOR_IP ]] 00:25:11.308 21:30:26 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:11.308 21:30:26 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:11.308 21:30:26 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:11.308 21:30:26 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:11.308 21:30:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:11.308 21:30:26 -- common/autotest_common.sh@10 -- # set +x 00:25:11.567 nvme0n1 00:25:11.567 21:30:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:11.567 21:30:26 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:11.567 21:30:26 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:11.567 21:30:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:11.567 21:30:26 -- common/autotest_common.sh@10 -- # set +x 00:25:11.567 21:30:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:11.567 21:30:26 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:11.567 21:30:26 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:11.567 21:30:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:11.567 21:30:26 -- common/autotest_common.sh@10 -- # set +x 00:25:11.567 21:30:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:11.567 21:30:26 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:11.567 21:30:26 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:25:11.567 21:30:26 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:11.567 21:30:26 -- host/auth.sh@44 -- # digest=sha512 00:25:11.567 21:30:26 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:11.567 21:30:26 -- host/auth.sh@44 -- # keyid=3 00:25:11.567 21:30:26 -- host/auth.sh@45 -- # key=DHHC-1:02:MTFjYmYzMDg0NTg2MzRhMzY0ODBlNzE1YWZkNmUzZWJmMzIyZWVkNjE0MDkzNWQz3FOfOQ==: 00:25:11.567 21:30:26 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:25:11.567 21:30:26 -- host/auth.sh@48 -- # echo ffdhe4096 00:25:11.567 21:30:26 -- host/auth.sh@49 -- # echo DHHC-1:02:MTFjYmYzMDg0NTg2MzRhMzY0ODBlNzE1YWZkNmUzZWJmMzIyZWVkNjE0MDkzNWQz3FOfOQ==: 00:25:11.567 21:30:26 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 3 00:25:11.567 21:30:26 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:11.567 21:30:26 -- host/auth.sh@68 -- # digest=sha512 00:25:11.567 21:30:26 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:25:11.567 21:30:26 -- host/auth.sh@68 -- # keyid=3 00:25:11.567 21:30:26 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:11.567 21:30:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:11.567 21:30:26 -- common/autotest_common.sh@10 -- # set +x 00:25:11.567 21:30:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:11.567 21:30:26 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:11.567 21:30:26 -- nvmf/common.sh@717 -- # local ip 00:25:11.567 21:30:26 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:11.567 21:30:26 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:11.567 21:30:26 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:11.567 21:30:26 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:11.567 21:30:26 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:11.567 21:30:26 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:11.567 21:30:26 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:11.567 21:30:26 -- 
nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:11.567 21:30:26 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:11.567 21:30:26 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:25:11.567 21:30:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:11.567 21:30:26 -- common/autotest_common.sh@10 -- # set +x 00:25:11.825 nvme0n1 00:25:11.825 21:30:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:11.825 21:30:26 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:11.825 21:30:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:11.825 21:30:26 -- common/autotest_common.sh@10 -- # set +x 00:25:11.825 21:30:26 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:11.825 21:30:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:11.825 21:30:26 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:11.825 21:30:26 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:11.825 21:30:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:11.825 21:30:26 -- common/autotest_common.sh@10 -- # set +x 00:25:11.825 21:30:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:11.825 21:30:26 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:11.826 21:30:26 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:25:11.826 21:30:26 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:11.826 21:30:26 -- host/auth.sh@44 -- # digest=sha512 00:25:11.826 21:30:26 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:11.826 21:30:26 -- host/auth.sh@44 -- # keyid=4 00:25:11.826 21:30:26 -- host/auth.sh@45 -- # key=DHHC-1:03:MzkyOTMyMzY1MTBiOWVjYjI0MDA3NzYwZTM2YTlkNTJmM2M2ZDI0NGMwZjc5M2Q4ZGE1Zjc5MWQ5ZWQwYjA0NAuu6f8=: 00:25:11.826 21:30:26 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:25:11.826 21:30:26 -- host/auth.sh@48 -- # echo ffdhe4096 00:25:11.826 21:30:26 -- host/auth.sh@49 -- # echo DHHC-1:03:MzkyOTMyMzY1MTBiOWVjYjI0MDA3NzYwZTM2YTlkNTJmM2M2ZDI0NGMwZjc5M2Q4ZGE1Zjc5MWQ5ZWQwYjA0NAuu6f8=: 00:25:11.826 21:30:26 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 4 00:25:11.826 21:30:26 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:11.826 21:30:26 -- host/auth.sh@68 -- # digest=sha512 00:25:11.826 21:30:26 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:25:11.826 21:30:26 -- host/auth.sh@68 -- # keyid=4 00:25:11.826 21:30:26 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:11.826 21:30:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:11.826 21:30:26 -- common/autotest_common.sh@10 -- # set +x 00:25:11.826 21:30:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:11.826 21:30:26 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:11.826 21:30:26 -- nvmf/common.sh@717 -- # local ip 00:25:11.826 21:30:26 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:11.826 21:30:26 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:11.826 21:30:26 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:11.826 21:30:26 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:11.826 21:30:26 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:11.826 21:30:26 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:11.826 21:30:26 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:11.826 21:30:26 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:11.826 21:30:26 -- 
nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:11.826 21:30:26 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:11.826 21:30:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:11.826 21:30:26 -- common/autotest_common.sh@10 -- # set +x 00:25:12.084 nvme0n1 00:25:12.084 21:30:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:12.084 21:30:26 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:12.084 21:30:26 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:12.084 21:30:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:12.084 21:30:26 -- common/autotest_common.sh@10 -- # set +x 00:25:12.084 21:30:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:12.084 21:30:27 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:12.084 21:30:27 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:12.084 21:30:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:12.084 21:30:27 -- common/autotest_common.sh@10 -- # set +x 00:25:12.084 21:30:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:12.084 21:30:27 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:25:12.084 21:30:27 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:12.084 21:30:27 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:25:12.084 21:30:27 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:12.084 21:30:27 -- host/auth.sh@44 -- # digest=sha512 00:25:12.084 21:30:27 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:12.084 21:30:27 -- host/auth.sh@44 -- # keyid=0 00:25:12.084 21:30:27 -- host/auth.sh@45 -- # key=DHHC-1:00:ODNhYzkzZmNkYTc5ODVhZmMzN2RhMTdlNjJhMTQxMzARnuzC: 00:25:12.084 21:30:27 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:25:12.084 21:30:27 -- host/auth.sh@48 -- # echo ffdhe6144 00:25:12.084 21:30:27 -- host/auth.sh@49 -- # echo DHHC-1:00:ODNhYzkzZmNkYTc5ODVhZmMzN2RhMTdlNjJhMTQxMzARnuzC: 00:25:12.085 21:30:27 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 0 00:25:12.085 21:30:27 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:12.085 21:30:27 -- host/auth.sh@68 -- # digest=sha512 00:25:12.085 21:30:27 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:25:12.085 21:30:27 -- host/auth.sh@68 -- # keyid=0 00:25:12.085 21:30:27 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:12.085 21:30:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:12.085 21:30:27 -- common/autotest_common.sh@10 -- # set +x 00:25:12.085 21:30:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:12.085 21:30:27 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:12.085 21:30:27 -- nvmf/common.sh@717 -- # local ip 00:25:12.085 21:30:27 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:12.085 21:30:27 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:12.085 21:30:27 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:12.085 21:30:27 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:12.085 21:30:27 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:12.085 21:30:27 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:12.085 21:30:27 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:12.085 21:30:27 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:12.085 21:30:27 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:12.085 21:30:27 -- host/auth.sh@70 -- # 
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:25:12.085 21:30:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:12.085 21:30:27 -- common/autotest_common.sh@10 -- # set +x 00:25:12.654 nvme0n1 00:25:12.654 21:30:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:12.654 21:30:27 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:12.654 21:30:27 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:12.654 21:30:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:12.654 21:30:27 -- common/autotest_common.sh@10 -- # set +x 00:25:12.654 21:30:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:12.654 21:30:27 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:12.654 21:30:27 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:12.654 21:30:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:12.654 21:30:27 -- common/autotest_common.sh@10 -- # set +x 00:25:12.654 21:30:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:12.654 21:30:27 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:12.654 21:30:27 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:25:12.654 21:30:27 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:12.654 21:30:27 -- host/auth.sh@44 -- # digest=sha512 00:25:12.654 21:30:27 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:12.654 21:30:27 -- host/auth.sh@44 -- # keyid=1 00:25:12.654 21:30:27 -- host/auth.sh@45 -- # key=DHHC-1:00:ODg0OTgyN2JiYTA4ZmZhN2FiNGUzNzdkNDU2MGNjYTcwYWJhNzdhN2M0ZTM4ODA1gO84og==: 00:25:12.654 21:30:27 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:25:12.654 21:30:27 -- host/auth.sh@48 -- # echo ffdhe6144 00:25:12.654 21:30:27 -- host/auth.sh@49 -- # echo DHHC-1:00:ODg0OTgyN2JiYTA4ZmZhN2FiNGUzNzdkNDU2MGNjYTcwYWJhNzdhN2M0ZTM4ODA1gO84og==: 00:25:12.654 21:30:27 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 1 00:25:12.654 21:30:27 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:12.654 21:30:27 -- host/auth.sh@68 -- # digest=sha512 00:25:12.654 21:30:27 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:25:12.654 21:30:27 -- host/auth.sh@68 -- # keyid=1 00:25:12.654 21:30:27 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:12.654 21:30:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:12.654 21:30:27 -- common/autotest_common.sh@10 -- # set +x 00:25:12.654 21:30:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:12.654 21:30:27 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:12.654 21:30:27 -- nvmf/common.sh@717 -- # local ip 00:25:12.654 21:30:27 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:12.654 21:30:27 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:12.654 21:30:27 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:12.654 21:30:27 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:12.654 21:30:27 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:12.654 21:30:27 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:12.654 21:30:27 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:12.654 21:30:27 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:12.654 21:30:27 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:12.654 21:30:27 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:25:12.654 21:30:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:12.654 21:30:27 -- common/autotest_common.sh@10 -- # set +x 00:25:12.915 nvme0n1 00:25:12.915 21:30:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:12.915 21:30:27 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:12.915 21:30:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:12.915 21:30:27 -- common/autotest_common.sh@10 -- # set +x 00:25:12.915 21:30:27 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:12.915 21:30:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:12.915 21:30:27 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:12.915 21:30:27 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:12.915 21:30:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:12.915 21:30:27 -- common/autotest_common.sh@10 -- # set +x 00:25:13.175 21:30:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:13.175 21:30:27 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:13.175 21:30:27 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:25:13.175 21:30:27 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:13.175 21:30:27 -- host/auth.sh@44 -- # digest=sha512 00:25:13.175 21:30:27 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:13.175 21:30:27 -- host/auth.sh@44 -- # keyid=2 00:25:13.175 21:30:27 -- host/auth.sh@45 -- # key=DHHC-1:01:NWUzM2Y2NDQ4NDc5MWYwMDViZjcwNWViNmQ2MjdmNzbCQTLC: 00:25:13.175 21:30:27 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:25:13.175 21:30:27 -- host/auth.sh@48 -- # echo ffdhe6144 00:25:13.175 21:30:27 -- host/auth.sh@49 -- # echo DHHC-1:01:NWUzM2Y2NDQ4NDc5MWYwMDViZjcwNWViNmQ2MjdmNzbCQTLC: 00:25:13.175 21:30:27 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 2 00:25:13.175 21:30:27 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:13.175 21:30:27 -- host/auth.sh@68 -- # digest=sha512 00:25:13.175 21:30:27 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:25:13.175 21:30:27 -- host/auth.sh@68 -- # keyid=2 00:25:13.175 21:30:27 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:13.175 21:30:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:13.175 21:30:27 -- common/autotest_common.sh@10 -- # set +x 00:25:13.175 21:30:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:13.175 21:30:27 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:13.175 21:30:27 -- nvmf/common.sh@717 -- # local ip 00:25:13.175 21:30:27 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:13.175 21:30:27 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:13.175 21:30:27 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:13.175 21:30:27 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:13.175 21:30:27 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:13.175 21:30:27 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:13.175 21:30:27 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:13.175 21:30:27 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:13.175 21:30:27 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:13.175 21:30:27 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:13.175 21:30:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:13.175 21:30:27 -- 
common/autotest_common.sh@10 -- # set +x 00:25:13.434 nvme0n1 00:25:13.434 21:30:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:13.434 21:30:28 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:13.435 21:30:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:13.435 21:30:28 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:13.435 21:30:28 -- common/autotest_common.sh@10 -- # set +x 00:25:13.435 21:30:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:13.435 21:30:28 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:13.435 21:30:28 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:13.435 21:30:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:13.435 21:30:28 -- common/autotest_common.sh@10 -- # set +x 00:25:13.435 21:30:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:13.435 21:30:28 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:13.435 21:30:28 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:25:13.435 21:30:28 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:13.435 21:30:28 -- host/auth.sh@44 -- # digest=sha512 00:25:13.435 21:30:28 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:13.435 21:30:28 -- host/auth.sh@44 -- # keyid=3 00:25:13.435 21:30:28 -- host/auth.sh@45 -- # key=DHHC-1:02:MTFjYmYzMDg0NTg2MzRhMzY0ODBlNzE1YWZkNmUzZWJmMzIyZWVkNjE0MDkzNWQz3FOfOQ==: 00:25:13.435 21:30:28 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:25:13.435 21:30:28 -- host/auth.sh@48 -- # echo ffdhe6144 00:25:13.435 21:30:28 -- host/auth.sh@49 -- # echo DHHC-1:02:MTFjYmYzMDg0NTg2MzRhMzY0ODBlNzE1YWZkNmUzZWJmMzIyZWVkNjE0MDkzNWQz3FOfOQ==: 00:25:13.435 21:30:28 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 3 00:25:13.435 21:30:28 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:13.435 21:30:28 -- host/auth.sh@68 -- # digest=sha512 00:25:13.435 21:30:28 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:25:13.435 21:30:28 -- host/auth.sh@68 -- # keyid=3 00:25:13.435 21:30:28 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:13.435 21:30:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:13.435 21:30:28 -- common/autotest_common.sh@10 -- # set +x 00:25:13.435 21:30:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:13.435 21:30:28 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:13.435 21:30:28 -- nvmf/common.sh@717 -- # local ip 00:25:13.435 21:30:28 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:13.435 21:30:28 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:13.435 21:30:28 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:13.435 21:30:28 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:13.435 21:30:28 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:13.435 21:30:28 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:13.435 21:30:28 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:13.435 21:30:28 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:13.435 21:30:28 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:13.435 21:30:28 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:25:13.435 21:30:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:13.435 21:30:28 -- common/autotest_common.sh@10 -- # set +x 00:25:14.002 nvme0n1 00:25:14.002 21:30:28 -- common/autotest_common.sh@577 -- 
# [[ 0 == 0 ]] 00:25:14.002 21:30:28 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:14.002 21:30:28 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:14.002 21:30:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:14.002 21:30:28 -- common/autotest_common.sh@10 -- # set +x 00:25:14.002 21:30:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:14.002 21:30:28 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:14.002 21:30:28 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:14.002 21:30:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:14.002 21:30:28 -- common/autotest_common.sh@10 -- # set +x 00:25:14.002 21:30:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:14.002 21:30:28 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:14.002 21:30:28 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:25:14.002 21:30:28 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:14.002 21:30:28 -- host/auth.sh@44 -- # digest=sha512 00:25:14.002 21:30:28 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:14.002 21:30:28 -- host/auth.sh@44 -- # keyid=4 00:25:14.002 21:30:28 -- host/auth.sh@45 -- # key=DHHC-1:03:MzkyOTMyMzY1MTBiOWVjYjI0MDA3NzYwZTM2YTlkNTJmM2M2ZDI0NGMwZjc5M2Q4ZGE1Zjc5MWQ5ZWQwYjA0NAuu6f8=: 00:25:14.002 21:30:28 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:25:14.002 21:30:28 -- host/auth.sh@48 -- # echo ffdhe6144 00:25:14.002 21:30:28 -- host/auth.sh@49 -- # echo DHHC-1:03:MzkyOTMyMzY1MTBiOWVjYjI0MDA3NzYwZTM2YTlkNTJmM2M2ZDI0NGMwZjc5M2Q4ZGE1Zjc5MWQ5ZWQwYjA0NAuu6f8=: 00:25:14.002 21:30:28 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 4 00:25:14.002 21:30:28 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:14.002 21:30:28 -- host/auth.sh@68 -- # digest=sha512 00:25:14.002 21:30:28 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:25:14.002 21:30:28 -- host/auth.sh@68 -- # keyid=4 00:25:14.002 21:30:28 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:14.002 21:30:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:14.002 21:30:28 -- common/autotest_common.sh@10 -- # set +x 00:25:14.002 21:30:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:14.002 21:30:28 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:14.002 21:30:28 -- nvmf/common.sh@717 -- # local ip 00:25:14.002 21:30:28 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:14.002 21:30:28 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:14.002 21:30:28 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:14.002 21:30:28 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:14.002 21:30:28 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:14.002 21:30:28 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:14.002 21:30:28 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:14.002 21:30:28 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:14.002 21:30:28 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:14.002 21:30:28 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:14.002 21:30:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:14.002 21:30:28 -- common/autotest_common.sh@10 -- # set +x 00:25:14.261 nvme0n1 00:25:14.261 21:30:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:14.261 21:30:29 -- host/auth.sh@73 -- # rpc_cmd 
bdev_nvme_get_controllers 00:25:14.261 21:30:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:14.261 21:30:29 -- common/autotest_common.sh@10 -- # set +x 00:25:14.261 21:30:29 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:14.261 21:30:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:14.261 21:30:29 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:14.261 21:30:29 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:14.261 21:30:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:14.261 21:30:29 -- common/autotest_common.sh@10 -- # set +x 00:25:14.261 21:30:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:14.261 21:30:29 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:25:14.261 21:30:29 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:14.261 21:30:29 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:25:14.261 21:30:29 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:14.261 21:30:29 -- host/auth.sh@44 -- # digest=sha512 00:25:14.261 21:30:29 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:14.261 21:30:29 -- host/auth.sh@44 -- # keyid=0 00:25:14.261 21:30:29 -- host/auth.sh@45 -- # key=DHHC-1:00:ODNhYzkzZmNkYTc5ODVhZmMzN2RhMTdlNjJhMTQxMzARnuzC: 00:25:14.261 21:30:29 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:25:14.261 21:30:29 -- host/auth.sh@48 -- # echo ffdhe8192 00:25:14.261 21:30:29 -- host/auth.sh@49 -- # echo DHHC-1:00:ODNhYzkzZmNkYTc5ODVhZmMzN2RhMTdlNjJhMTQxMzARnuzC: 00:25:14.261 21:30:29 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 0 00:25:14.261 21:30:29 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:14.261 21:30:29 -- host/auth.sh@68 -- # digest=sha512 00:25:14.261 21:30:29 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:25:14.261 21:30:29 -- host/auth.sh@68 -- # keyid=0 00:25:14.261 21:30:29 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:14.261 21:30:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:14.261 21:30:29 -- common/autotest_common.sh@10 -- # set +x 00:25:14.261 21:30:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:14.261 21:30:29 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:14.261 21:30:29 -- nvmf/common.sh@717 -- # local ip 00:25:14.261 21:30:29 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:14.261 21:30:29 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:14.261 21:30:29 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:14.261 21:30:29 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:14.261 21:30:29 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:14.261 21:30:29 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:14.261 21:30:29 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:14.261 21:30:29 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:14.261 21:30:29 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:14.261 21:30:29 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:25:14.261 21:30:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:14.261 21:30:29 -- common/autotest_common.sh@10 -- # set +x 00:25:15.199 nvme0n1 00:25:15.199 21:30:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:15.199 21:30:29 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:15.199 21:30:29 -- host/auth.sh@73 -- # jq -r '.[].name' 
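The nvmet_auth_set_key traces above are the target-side half of each pass: the three echo lines at host/auth.sh@47-49 push the HMAC digest, the FFDHE group, and the DHHC-1 secret into the kernel target's per-host configfs entry. A minimal sketch of what the ffdhe8192/key0 pass just traced amounts to, assuming the standard nvmet dhchap_* attribute files under the host NQN used throughout this run:

    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    # digest used for the DH-HMAC-CHAP challenge
    echo 'hmac(sha512)' > "$host/dhchap_hash"
    # FFDHE group used for the key exchange
    echo ffdhe8192 > "$host/dhchap_dhgroup"
    # per-host secret for keyid 0, exactly as echoed at host/auth.sh@49 above
    echo 'DHHC-1:00:ODNhYzkzZmNkYTc5ODVhZmMzN2RhMTdlNjJhMTQxMzARnuzC:' > "$host/dhchap_key"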
00:25:15.199 21:30:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:15.199 21:30:29 -- common/autotest_common.sh@10 -- # set +x 00:25:15.199 21:30:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:15.199 21:30:29 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:15.199 21:30:29 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:15.199 21:30:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:15.199 21:30:29 -- common/autotest_common.sh@10 -- # set +x 00:25:15.199 21:30:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:15.199 21:30:29 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:15.199 21:30:29 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:25:15.199 21:30:29 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:15.199 21:30:29 -- host/auth.sh@44 -- # digest=sha512 00:25:15.199 21:30:29 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:15.199 21:30:29 -- host/auth.sh@44 -- # keyid=1 00:25:15.199 21:30:29 -- host/auth.sh@45 -- # key=DHHC-1:00:ODg0OTgyN2JiYTA4ZmZhN2FiNGUzNzdkNDU2MGNjYTcwYWJhNzdhN2M0ZTM4ODA1gO84og==: 00:25:15.199 21:30:29 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:25:15.199 21:30:29 -- host/auth.sh@48 -- # echo ffdhe8192 00:25:15.199 21:30:29 -- host/auth.sh@49 -- # echo DHHC-1:00:ODg0OTgyN2JiYTA4ZmZhN2FiNGUzNzdkNDU2MGNjYTcwYWJhNzdhN2M0ZTM4ODA1gO84og==: 00:25:15.199 21:30:29 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 1 00:25:15.199 21:30:29 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:15.199 21:30:29 -- host/auth.sh@68 -- # digest=sha512 00:25:15.199 21:30:29 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:25:15.199 21:30:29 -- host/auth.sh@68 -- # keyid=1 00:25:15.199 21:30:29 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:15.199 21:30:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:15.199 21:30:29 -- common/autotest_common.sh@10 -- # set +x 00:25:15.199 21:30:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:15.199 21:30:29 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:15.199 21:30:29 -- nvmf/common.sh@717 -- # local ip 00:25:15.199 21:30:29 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:15.199 21:30:29 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:15.199 21:30:29 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:15.199 21:30:29 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:15.199 21:30:29 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:15.199 21:30:29 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:15.199 21:30:29 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:15.199 21:30:29 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:15.199 21:30:29 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:15.199 21:30:29 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:25:15.199 21:30:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:15.199 21:30:29 -- common/autotest_common.sh@10 -- # set +x 00:25:15.767 nvme0n1 00:25:15.767 21:30:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:15.767 21:30:30 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:15.767 21:30:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:15.767 21:30:30 -- common/autotest_common.sh@10 -- # set +x 00:25:15.767 21:30:30 -- host/auth.sh@73 
-- # jq -r '.[].name' 00:25:15.767 21:30:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:15.767 21:30:30 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:15.767 21:30:30 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:15.767 21:30:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:15.767 21:30:30 -- common/autotest_common.sh@10 -- # set +x 00:25:15.767 21:30:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:15.767 21:30:30 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:15.767 21:30:30 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:25:15.767 21:30:30 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:15.767 21:30:30 -- host/auth.sh@44 -- # digest=sha512 00:25:15.767 21:30:30 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:15.767 21:30:30 -- host/auth.sh@44 -- # keyid=2 00:25:15.767 21:30:30 -- host/auth.sh@45 -- # key=DHHC-1:01:NWUzM2Y2NDQ4NDc5MWYwMDViZjcwNWViNmQ2MjdmNzbCQTLC: 00:25:15.767 21:30:30 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:25:15.767 21:30:30 -- host/auth.sh@48 -- # echo ffdhe8192 00:25:15.767 21:30:30 -- host/auth.sh@49 -- # echo DHHC-1:01:NWUzM2Y2NDQ4NDc5MWYwMDViZjcwNWViNmQ2MjdmNzbCQTLC: 00:25:15.767 21:30:30 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 2 00:25:15.767 21:30:30 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:15.767 21:30:30 -- host/auth.sh@68 -- # digest=sha512 00:25:15.767 21:30:30 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:25:15.767 21:30:30 -- host/auth.sh@68 -- # keyid=2 00:25:15.767 21:30:30 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:15.767 21:30:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:15.767 21:30:30 -- common/autotest_common.sh@10 -- # set +x 00:25:15.767 21:30:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:15.767 21:30:30 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:15.767 21:30:30 -- nvmf/common.sh@717 -- # local ip 00:25:15.767 21:30:30 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:15.767 21:30:30 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:15.767 21:30:30 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:15.767 21:30:30 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:15.767 21:30:30 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:15.767 21:30:30 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:15.767 21:30:30 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:15.767 21:30:30 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:15.767 21:30:30 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:15.767 21:30:30 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:15.767 21:30:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:15.767 21:30:30 -- common/autotest_common.sh@10 -- # set +x 00:25:16.333 nvme0n1 00:25:16.333 21:30:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:16.333 21:30:31 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:16.333 21:30:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:16.333 21:30:31 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:16.333 21:30:31 -- common/autotest_common.sh@10 -- # set +x 00:25:16.333 21:30:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:16.333 21:30:31 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 
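Every connect_authenticate pass reduces to the same four initiator-side RPCs. A sketch of the ffdhe8192/key1 pass just completed, run standalone (rpc_cmd in these traces is effectively the autotest wrapper around scripts/rpc.py in the spdk checkout; addresses, NQNs, and the key id are taken verbatim from the log):

    # restrict the initiator to the digest/dhgroup combination under test
    scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
    # attach with the matching DH-HMAC-CHAP key; authentication happens here
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1
    # verify the controller came up, then tear it down for the next combination
    scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
    scripts/rpc.py bdev_nvme_detach_controller nvme0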
00:25:16.333 21:30:31 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:16.333 21:30:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:16.333 21:30:31 -- common/autotest_common.sh@10 -- # set +x 00:25:16.333 21:30:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:16.333 21:30:31 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:16.333 21:30:31 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:25:16.333 21:30:31 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:16.333 21:30:31 -- host/auth.sh@44 -- # digest=sha512 00:25:16.333 21:30:31 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:16.333 21:30:31 -- host/auth.sh@44 -- # keyid=3 00:25:16.333 21:30:31 -- host/auth.sh@45 -- # key=DHHC-1:02:MTFjYmYzMDg0NTg2MzRhMzY0ODBlNzE1YWZkNmUzZWJmMzIyZWVkNjE0MDkzNWQz3FOfOQ==: 00:25:16.333 21:30:31 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:25:16.333 21:30:31 -- host/auth.sh@48 -- # echo ffdhe8192 00:25:16.333 21:30:31 -- host/auth.sh@49 -- # echo DHHC-1:02:MTFjYmYzMDg0NTg2MzRhMzY0ODBlNzE1YWZkNmUzZWJmMzIyZWVkNjE0MDkzNWQz3FOfOQ==: 00:25:16.333 21:30:31 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 3 00:25:16.333 21:30:31 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:16.333 21:30:31 -- host/auth.sh@68 -- # digest=sha512 00:25:16.333 21:30:31 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:25:16.333 21:30:31 -- host/auth.sh@68 -- # keyid=3 00:25:16.334 21:30:31 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:16.334 21:30:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:16.334 21:30:31 -- common/autotest_common.sh@10 -- # set +x 00:25:16.334 21:30:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:16.334 21:30:31 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:16.334 21:30:31 -- nvmf/common.sh@717 -- # local ip 00:25:16.334 21:30:31 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:16.334 21:30:31 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:16.334 21:30:31 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:16.334 21:30:31 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:16.334 21:30:31 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:16.334 21:30:31 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:16.334 21:30:31 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:16.334 21:30:31 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:16.334 21:30:31 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:16.334 21:30:31 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:25:16.334 21:30:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:16.334 21:30:31 -- common/autotest_common.sh@10 -- # set +x 00:25:16.904 nvme0n1 00:25:16.904 21:30:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:16.904 21:30:31 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:16.904 21:30:31 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:16.904 21:30:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:16.904 21:30:31 -- common/autotest_common.sh@10 -- # set +x 00:25:16.904 21:30:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:16.904 21:30:31 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:16.904 21:30:31 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:16.904 21:30:31 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:25:16.904 21:30:31 -- common/autotest_common.sh@10 -- # set +x 00:25:16.904 21:30:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:16.904 21:30:31 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:16.904 21:30:31 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:25:16.904 21:30:31 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:16.904 21:30:31 -- host/auth.sh@44 -- # digest=sha512 00:25:16.904 21:30:31 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:16.904 21:30:31 -- host/auth.sh@44 -- # keyid=4 00:25:16.904 21:30:31 -- host/auth.sh@45 -- # key=DHHC-1:03:MzkyOTMyMzY1MTBiOWVjYjI0MDA3NzYwZTM2YTlkNTJmM2M2ZDI0NGMwZjc5M2Q4ZGE1Zjc5MWQ5ZWQwYjA0NAuu6f8=: 00:25:16.904 21:30:31 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:25:16.904 21:30:31 -- host/auth.sh@48 -- # echo ffdhe8192 00:25:16.904 21:30:31 -- host/auth.sh@49 -- # echo DHHC-1:03:MzkyOTMyMzY1MTBiOWVjYjI0MDA3NzYwZTM2YTlkNTJmM2M2ZDI0NGMwZjc5M2Q4ZGE1Zjc5MWQ5ZWQwYjA0NAuu6f8=: 00:25:16.904 21:30:31 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 4 00:25:16.904 21:30:31 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:16.904 21:30:31 -- host/auth.sh@68 -- # digest=sha512 00:25:16.905 21:30:31 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:25:16.905 21:30:31 -- host/auth.sh@68 -- # keyid=4 00:25:16.905 21:30:31 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:16.905 21:30:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:16.905 21:30:31 -- common/autotest_common.sh@10 -- # set +x 00:25:16.905 21:30:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:16.905 21:30:31 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:16.905 21:30:31 -- nvmf/common.sh@717 -- # local ip 00:25:16.905 21:30:31 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:16.905 21:30:31 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:16.905 21:30:31 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:16.905 21:30:31 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:16.905 21:30:31 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:16.905 21:30:31 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:16.905 21:30:31 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:16.905 21:30:31 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:16.905 21:30:31 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:16.905 21:30:31 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:16.905 21:30:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:16.905 21:30:31 -- common/autotest_common.sh@10 -- # set +x 00:25:17.844 nvme0n1 00:25:17.844 21:30:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:17.844 21:30:32 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:17.844 21:30:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:17.844 21:30:32 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:17.844 21:30:32 -- common/autotest_common.sh@10 -- # set +x 00:25:17.844 21:30:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:17.844 21:30:32 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:17.844 21:30:32 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:17.844 21:30:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:17.844 21:30:32 -- 
common/autotest_common.sh@10 -- # set +x 00:25:17.844 21:30:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:17.844 21:30:32 -- host/auth.sh@117 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:17.844 21:30:32 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:17.844 21:30:32 -- host/auth.sh@44 -- # digest=sha256 00:25:17.844 21:30:32 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:17.844 21:30:32 -- host/auth.sh@44 -- # keyid=1 00:25:17.844 21:30:32 -- host/auth.sh@45 -- # key=DHHC-1:00:ODg0OTgyN2JiYTA4ZmZhN2FiNGUzNzdkNDU2MGNjYTcwYWJhNzdhN2M0ZTM4ODA1gO84og==: 00:25:17.844 21:30:32 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:25:17.844 21:30:32 -- host/auth.sh@48 -- # echo ffdhe2048 00:25:17.844 21:30:32 -- host/auth.sh@49 -- # echo DHHC-1:00:ODg0OTgyN2JiYTA4ZmZhN2FiNGUzNzdkNDU2MGNjYTcwYWJhNzdhN2M0ZTM4ODA1gO84og==: 00:25:17.844 21:30:32 -- host/auth.sh@118 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:17.844 21:30:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:17.844 21:30:32 -- common/autotest_common.sh@10 -- # set +x 00:25:17.844 21:30:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:17.844 21:30:32 -- host/auth.sh@119 -- # get_main_ns_ip 00:25:17.844 21:30:32 -- nvmf/common.sh@717 -- # local ip 00:25:17.844 21:30:32 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:17.844 21:30:32 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:17.844 21:30:32 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:17.844 21:30:32 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:17.844 21:30:32 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:17.844 21:30:32 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:17.844 21:30:32 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:17.845 21:30:32 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:17.845 21:30:32 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:17.845 21:30:32 -- host/auth.sh@119 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:17.845 21:30:32 -- common/autotest_common.sh@638 -- # local es=0 00:25:17.845 21:30:32 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:17.845 21:30:32 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:25:17.845 21:30:32 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:25:17.845 21:30:32 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:25:17.845 21:30:32 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:25:17.845 21:30:32 -- common/autotest_common.sh@641 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:17.845 21:30:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:17.845 21:30:32 -- common/autotest_common.sh@10 -- # set +x 00:25:17.845 request: 00:25:17.845 { 00:25:17.845 "name": "nvme0", 00:25:17.845 "trtype": "tcp", 00:25:17.845 "traddr": "10.0.0.1", 00:25:17.845 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:17.845 "adrfam": "ipv4", 00:25:17.845 "trsvcid": "4420", 00:25:17.845 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:17.845 "method": "bdev_nvme_attach_controller", 00:25:17.845 "req_id": 1 00:25:17.845 } 00:25:17.845 Got JSON-RPC error response 
00:25:17.845 response: 00:25:17.845 { 00:25:17.845 "code": -32602, 00:25:17.845 "message": "Invalid parameters" 00:25:17.845 } 00:25:17.845 21:30:32 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:25:17.845 21:30:32 -- common/autotest_common.sh@641 -- # es=1 00:25:17.845 21:30:32 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:25:17.845 21:30:32 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:25:17.845 21:30:32 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:25:17.845 21:30:32 -- host/auth.sh@121 -- # rpc_cmd bdev_nvme_get_controllers 00:25:17.845 21:30:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:17.845 21:30:32 -- common/autotest_common.sh@10 -- # set +x 00:25:17.845 21:30:32 -- host/auth.sh@121 -- # jq length 00:25:17.845 21:30:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:17.845 21:30:32 -- host/auth.sh@121 -- # (( 0 == 0 )) 00:25:17.845 21:30:32 -- host/auth.sh@124 -- # get_main_ns_ip 00:25:17.845 21:30:32 -- nvmf/common.sh@717 -- # local ip 00:25:17.845 21:30:32 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:17.845 21:30:32 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:17.845 21:30:32 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:17.845 21:30:32 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:17.845 21:30:32 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:17.845 21:30:32 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:17.845 21:30:32 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:17.845 21:30:32 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:17.845 21:30:32 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:17.845 21:30:32 -- host/auth.sh@124 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:17.845 21:30:32 -- common/autotest_common.sh@638 -- # local es=0 00:25:17.845 21:30:32 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:17.845 21:30:32 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:25:17.845 21:30:32 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:25:17.845 21:30:32 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:25:17.845 21:30:32 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:25:17.845 21:30:32 -- common/autotest_common.sh@641 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:17.845 21:30:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:17.845 21:30:32 -- common/autotest_common.sh@10 -- # set +x 00:25:17.845 request: 00:25:17.845 { 00:25:17.845 "name": "nvme0", 00:25:17.845 "trtype": "tcp", 00:25:17.845 "traddr": "10.0.0.1", 00:25:17.845 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:17.845 "adrfam": "ipv4", 00:25:17.845 "trsvcid": "4420", 00:25:17.845 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:17.845 "dhchap_key": "key2", 00:25:17.845 "method": "bdev_nvme_attach_controller", 00:25:17.845 "req_id": 1 00:25:17.845 } 00:25:17.845 Got JSON-RPC error response 00:25:17.845 response: 00:25:17.845 { 00:25:17.845 "code": -32602, 00:25:17.845 "message": "Invalid parameters" 00:25:17.845 } 00:25:17.845 21:30:32 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 
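Both JSON-RPC error dumps above are expected failures: with the target re-keyed to sha256/ffdhe2048/key1, an attach with no key and an attach with the mismatched key2 must each be rejected, and the harness asserts that with the NOT helper (hence the es=1 traces). A simplified sketch of that idiom; the real helper in common/autotest_common.sh additionally validates its argument and distinguishes exit-code ranges, as the valid_exec_arg and (( es > 128 )) traces show:

    NOT() {
        # invert the wrapped command's status: succeed only when it fails
        if "$@"; then
            return 1
        fi
        return 0
    }
    NOT scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 \
        -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2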
00:25:17.845 21:30:32 -- common/autotest_common.sh@641 -- # es=1 00:25:17.845 21:30:32 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:25:17.845 21:30:32 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:25:17.845 21:30:32 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:25:17.845 21:30:32 -- host/auth.sh@127 -- # rpc_cmd bdev_nvme_get_controllers 00:25:17.845 21:30:32 -- host/auth.sh@127 -- # jq length 00:25:17.845 21:30:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:17.845 21:30:32 -- common/autotest_common.sh@10 -- # set +x 00:25:17.845 21:30:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:17.845 21:30:32 -- host/auth.sh@127 -- # (( 0 == 0 )) 00:25:17.845 21:30:32 -- host/auth.sh@129 -- # trap - SIGINT SIGTERM EXIT 00:25:17.845 21:30:32 -- host/auth.sh@130 -- # cleanup 00:25:17.845 21:30:32 -- host/auth.sh@24 -- # nvmftestfini 00:25:17.845 21:30:32 -- nvmf/common.sh@477 -- # nvmfcleanup 00:25:17.845 21:30:32 -- nvmf/common.sh@117 -- # sync 00:25:17.845 21:30:32 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:17.845 21:30:32 -- nvmf/common.sh@120 -- # set +e 00:25:17.845 21:30:32 -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:17.845 21:30:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:17.845 rmmod nvme_tcp 00:25:17.845 rmmod nvme_fabrics 00:25:17.845 21:30:32 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:17.845 21:30:32 -- nvmf/common.sh@124 -- # set -e 00:25:17.845 21:30:32 -- nvmf/common.sh@125 -- # return 0 00:25:17.845 21:30:32 -- nvmf/common.sh@478 -- # '[' -n 1336641 ']' 00:25:17.845 21:30:32 -- nvmf/common.sh@479 -- # killprocess 1336641 00:25:17.845 21:30:32 -- common/autotest_common.sh@936 -- # '[' -z 1336641 ']' 00:25:17.845 21:30:32 -- common/autotest_common.sh@940 -- # kill -0 1336641 00:25:17.845 21:30:32 -- common/autotest_common.sh@941 -- # uname 00:25:17.845 21:30:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:17.845 21:30:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1336641 00:25:18.103 21:30:32 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:18.103 21:30:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:18.103 21:30:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1336641' 00:25:18.103 killing process with pid 1336641 00:25:18.103 21:30:32 -- common/autotest_common.sh@955 -- # kill 1336641 00:25:18.103 21:30:32 -- common/autotest_common.sh@960 -- # wait 1336641 00:25:18.361 21:30:33 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:25:18.361 21:30:33 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:25:18.361 21:30:33 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:25:18.361 21:30:33 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:18.361 21:30:33 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:18.361 21:30:33 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:18.361 21:30:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:18.361 21:30:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:20.902 21:30:35 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:20.902 21:30:35 -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:25:20.902 21:30:35 -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:20.902 21:30:35 -- host/auth.sh@27 -- # clean_kernel_target 00:25:20.902 21:30:35 -- 
nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:25:20.902 21:30:35 -- nvmf/common.sh@675 -- # echo 0 00:25:20.902 21:30:35 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:20.902 21:30:35 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:20.902 21:30:35 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:25:20.902 21:30:35 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:20.903 21:30:35 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:25:20.903 21:30:35 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:25:20.903 21:30:35 -- nvmf/common.sh@687 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh 00:25:23.434 0000:74:02.0 (8086 0cfe): idxd -> vfio-pci 00:25:23.434 0000:f1:02.0 (8086 0cfe): idxd -> vfio-pci 00:25:23.434 0000:79:02.0 (8086 0cfe): idxd -> vfio-pci 00:25:23.434 0000:6f:01.0 (8086 0b25): idxd -> vfio-pci 00:25:23.434 0000:6f:02.0 (8086 0cfe): idxd -> vfio-pci 00:25:23.434 0000:f6:01.0 (8086 0b25): idxd -> vfio-pci 00:25:23.434 0000:f6:02.0 (8086 0cfe): idxd -> vfio-pci 00:25:23.434 0000:74:01.0 (8086 0b25): idxd -> vfio-pci 00:25:23.692 0000:6a:02.0 (8086 0cfe): idxd -> vfio-pci 00:25:23.692 0000:79:01.0 (8086 0b25): idxd -> vfio-pci 00:25:23.692 0000:ec:01.0 (8086 0b25): idxd -> vfio-pci 00:25:23.692 0000:6a:01.0 (8086 0b25): idxd -> vfio-pci 00:25:23.692 0000:ec:02.0 (8086 0cfe): idxd -> vfio-pci 00:25:23.692 0000:e7:01.0 (8086 0b25): idxd -> vfio-pci 00:25:23.692 0000:e7:02.0 (8086 0cfe): idxd -> vfio-pci 00:25:23.692 0000:f1:01.0 (8086 0b25): idxd -> vfio-pci 00:25:25.600 0000:c9:00.0 (8086 0a54): nvme -> vfio-pci 00:25:25.600 0000:cb:00.0 (8086 0a54): nvme -> vfio-pci 00:25:25.600 0000:ca:00.0 (8086 0a54): nvme -> vfio-pci 00:25:26.167 21:30:40 -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.fWH /tmp/spdk.key-null.YDo /tmp/spdk.key-sha256.Lfk /tmp/spdk.key-sha384.L5D /tmp/spdk.key-sha512.wZT /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/nvme-auth.log 00:25:26.167 21:30:40 -- host/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh 00:25:28.710 0000:74:02.0 (8086 0cfe): Already using the vfio-pci driver 00:25:28.710 0000:c9:00.0 (8086 0a54): Already using the vfio-pci driver 00:25:28.710 0000:f1:02.0 (8086 0cfe): Already using the vfio-pci driver 00:25:28.710 0000:79:02.0 (8086 0cfe): Already using the vfio-pci driver 00:25:28.710 0000:cb:00.0 (8086 0a54): Already using the vfio-pci driver 00:25:28.710 0000:6f:01.0 (8086 0b25): Already using the vfio-pci driver 00:25:28.710 0000:6f:02.0 (8086 0cfe): Already using the vfio-pci driver 00:25:28.710 0000:f6:01.0 (8086 0b25): Already using the vfio-pci driver 00:25:28.710 0000:f6:02.0 (8086 0cfe): Already using the vfio-pci driver 00:25:28.710 0000:74:01.0 (8086 0b25): Already using the vfio-pci driver 00:25:28.710 0000:6a:02.0 (8086 0cfe): Already using the vfio-pci driver 00:25:28.710 0000:79:01.0 (8086 0b25): Already using the vfio-pci driver 00:25:28.710 0000:ec:01.0 (8086 0b25): Already using the vfio-pci driver 00:25:28.710 0000:6a:01.0 (8086 0b25): Already using the vfio-pci driver 00:25:28.710 0000:ec:02.0 (8086 0cfe): Already using the vfio-pci driver 00:25:28.710 0000:ca:00.0 (8086 0a54): Already using the vfio-pci driver 00:25:28.710 0000:e7:01.0 (8086 0b25): Already using the vfio-pci driver 
00:25:28.710 0000:e7:02.0 (8086 0cfe): Already using the vfio-pci driver 00:25:28.710 0000:f1:01.0 (8086 0b25): Already using the vfio-pci driver 00:25:28.971 00:25:28.971 real 0m51.642s 00:25:28.971 user 0m42.442s 00:25:28.971 sys 0m12.046s 00:25:28.971 21:30:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:28.971 21:30:43 -- common/autotest_common.sh@10 -- # set +x 00:25:28.971 ************************************ 00:25:28.971 END TEST nvmf_auth 00:25:28.971 ************************************ 00:25:28.971 21:30:43 -- nvmf/nvmf.sh@104 -- # [[ tcp == \t\c\p ]] 00:25:28.971 21:30:43 -- nvmf/nvmf.sh@105 -- # run_test nvmf_digest /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:25:28.971 21:30:43 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:25:28.971 21:30:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:28.971 21:30:43 -- common/autotest_common.sh@10 -- # set +x 00:25:29.231 ************************************ 00:25:29.231 START TEST nvmf_digest 00:25:29.231 ************************************ 00:25:29.231 21:30:43 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:25:29.231 * Looking for test storage... 00:25:29.231 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host 00:25:29.231 21:30:44 -- host/digest.sh@12 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:25:29.231 21:30:44 -- nvmf/common.sh@7 -- # uname -s 00:25:29.231 21:30:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:29.231 21:30:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:29.231 21:30:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:29.231 21:30:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:29.231 21:30:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:29.231 21:30:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:29.231 21:30:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:29.231 21:30:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:29.231 21:30:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:29.231 21:30:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:29.231 21:30:44 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b7babf-2e5c-ee11-906e-a4bf01970bf2 00:25:29.231 21:30:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=80b7babf-2e5c-ee11-906e-a4bf01970bf2 00:25:29.231 21:30:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:29.231 21:30:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:29.231 21:30:44 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:25:29.231 21:30:44 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:29.231 21:30:44 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:25:29.231 21:30:44 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:29.231 21:30:44 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:29.231 21:30:44 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:29.231 21:30:44 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:29.231 21:30:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:29.231 21:30:44 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:29.231 21:30:44 -- paths/export.sh@5 -- # export PATH 00:25:29.231 21:30:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:29.231 21:30:44 -- nvmf/common.sh@47 -- # : 0 00:25:29.231 21:30:44 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:29.231 21:30:44 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:29.231 21:30:44 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:29.231 21:30:44 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:29.231 21:30:44 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:29.231 21:30:44 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:29.231 21:30:44 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:29.231 21:30:44 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:29.231 21:30:44 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:25:29.231 21:30:44 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:25:29.231 21:30:44 -- host/digest.sh@16 -- # runtime=2 00:25:29.231 21:30:44 -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:25:29.231 21:30:44 -- host/digest.sh@138 -- # nvmftestinit 00:25:29.231 21:30:44 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:25:29.231 21:30:44 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:29.231 21:30:44 -- nvmf/common.sh@437 -- # prepare_net_devs 00:25:29.231 21:30:44 -- nvmf/common.sh@399 -- # local -g is_hw=no 
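Among the nvmf/common.sh defaults sourced above, the host identity is built at nvmf/common.sh@17-19 by asking nvme-cli for a fresh NQN and reusing its UUID suffix as the host ID. A sketch of that derivation (the suffix extraction is an assumption inferred from the matching NVME_HOSTNQN/NVME_HOSTID values in the trace):

    NVME_HOSTNQN=$(nvme gen-hostnqn)   # nqn.2014-08.org.nvmexpress:uuid:<random-uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}    # keep only the <random-uuid> part
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")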
00:25:29.231 21:30:44 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:25:29.231 21:30:44 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:29.231 21:30:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:29.231 21:30:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:29.231 21:30:44 -- nvmf/common.sh@403 -- # [[ phy-fallback != virt ]] 00:25:29.231 21:30:44 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:25:29.231 21:30:44 -- nvmf/common.sh@285 -- # xtrace_disable 00:25:29.231 21:30:44 -- common/autotest_common.sh@10 -- # set +x 00:25:35.890 21:30:49 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:35.890 21:30:49 -- nvmf/common.sh@291 -- # pci_devs=() 00:25:35.890 21:30:49 -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:35.890 21:30:49 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:35.890 21:30:49 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:35.890 21:30:49 -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:35.890 21:30:49 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:35.890 21:30:49 -- nvmf/common.sh@295 -- # net_devs=() 00:25:35.890 21:30:49 -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:35.890 21:30:49 -- nvmf/common.sh@296 -- # e810=() 00:25:35.890 21:30:49 -- nvmf/common.sh@296 -- # local -ga e810 00:25:35.890 21:30:49 -- nvmf/common.sh@297 -- # x722=() 00:25:35.890 21:30:49 -- nvmf/common.sh@297 -- # local -ga x722 00:25:35.890 21:30:49 -- nvmf/common.sh@298 -- # mlx=() 00:25:35.890 21:30:49 -- nvmf/common.sh@298 -- # local -ga mlx 00:25:35.890 21:30:49 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:35.890 21:30:49 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:35.890 21:30:49 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:35.890 21:30:49 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:35.890 21:30:49 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:35.890 21:30:49 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:35.890 21:30:49 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:35.890 21:30:49 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:35.890 21:30:49 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:35.890 21:30:49 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:35.890 21:30:49 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:35.890 21:30:49 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:35.890 21:30:49 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:35.890 21:30:49 -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:25:35.890 21:30:49 -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:25:35.890 21:30:49 -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:25:35.890 21:30:49 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:35.890 21:30:49 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:35.890 21:30:49 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:25:35.890 Found 0000:27:00.0 (0x8086 - 0x159b) 00:25:35.890 21:30:49 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:35.890 21:30:49 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:35.890 21:30:49 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:35.890 21:30:49 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:35.890 21:30:49 -- 
nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:35.890 21:30:49 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:35.890 21:30:49 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:25:35.890 Found 0000:27:00.1 (0x8086 - 0x159b) 00:25:35.890 21:30:49 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:35.890 21:30:49 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:35.890 21:30:49 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:35.890 21:30:49 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:35.890 21:30:49 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:35.890 21:30:49 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:35.890 21:30:49 -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:25:35.890 21:30:49 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:35.890 21:30:49 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:35.890 21:30:49 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:25:35.890 21:30:49 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:35.890 21:30:49 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:25:35.890 Found net devices under 0000:27:00.0: cvl_0_0 00:25:35.890 21:30:49 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:25:35.890 21:30:49 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:35.890 21:30:49 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:35.890 21:30:49 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:25:35.890 21:30:49 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:35.890 21:30:49 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:25:35.890 Found net devices under 0000:27:00.1: cvl_0_1 00:25:35.890 21:30:49 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:25:35.890 21:30:49 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:25:35.890 21:30:49 -- nvmf/common.sh@403 -- # is_hw=yes 00:25:35.890 21:30:49 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:25:35.890 21:30:49 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:25:35.890 21:30:49 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:25:35.890 21:30:49 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:35.890 21:30:49 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:35.890 21:30:49 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:35.890 21:30:49 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:35.890 21:30:49 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:35.890 21:30:49 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:35.890 21:30:49 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:35.890 21:30:49 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:35.890 21:30:49 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:35.890 21:30:49 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:35.890 21:30:49 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:35.890 21:30:49 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:35.890 21:30:49 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:35.890 21:30:49 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:35.890 21:30:49 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:35.890 21:30:49 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:35.890 21:30:49 -- 
nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:35.890 21:30:49 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:35.890 21:30:49 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:35.890 21:30:49 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:35.890 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:35.890 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.392 ms 00:25:35.890 00:25:35.890 --- 10.0.0.2 ping statistics --- 00:25:35.890 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:35.890 rtt min/avg/max/mdev = 0.392/0.392/0.392/0.000 ms 00:25:35.890 21:30:49 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:35.890 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:35.890 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.264 ms 00:25:35.890 00:25:35.890 --- 10.0.0.1 ping statistics --- 00:25:35.890 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:35.890 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:25:35.890 21:30:49 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:35.890 21:30:49 -- nvmf/common.sh@411 -- # return 0 00:25:35.890 21:30:49 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:25:35.890 21:30:49 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:35.890 21:30:49 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:25:35.890 21:30:49 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:25:35.891 21:30:49 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:35.891 21:30:49 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:25:35.891 21:30:49 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:25:35.891 21:30:49 -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:25:35.891 21:30:49 -- host/digest.sh@141 -- # [[ 1 -eq 1 ]] 00:25:35.891 21:30:49 -- host/digest.sh@142 -- # run_test nvmf_digest_dsa_initiator run_digest dsa_initiator 00:25:35.891 21:30:49 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:25:35.891 21:30:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:35.891 21:30:49 -- common/autotest_common.sh@10 -- # set +x 00:25:35.891 ************************************ 00:25:35.891 START TEST nvmf_digest_dsa_initiator 00:25:35.891 ************************************ 00:25:35.891 21:30:49 -- common/autotest_common.sh@1111 -- # run_digest dsa_initiator 00:25:35.891 21:30:49 -- host/digest.sh@120 -- # local dsa_initiator 00:25:35.891 21:30:49 -- host/digest.sh@121 -- # [[ dsa_initiator == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:25:35.891 21:30:49 -- host/digest.sh@121 -- # dsa_initiator=true 00:25:35.891 21:30:49 -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:25:35.891 21:30:49 -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:25:35.891 21:30:49 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:25:35.891 21:30:49 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:35.891 21:30:49 -- common/autotest_common.sh@10 -- # set +x 00:25:35.891 21:30:49 -- nvmf/common.sh@470 -- # nvmfpid=1353018 00:25:35.891 21:30:49 -- nvmf/common.sh@471 -- # waitforlisten 1353018 00:25:35.891 21:30:49 -- common/autotest_common.sh@817 -- # '[' -z 1353018 ']' 00:25:35.891 21:30:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:35.891 21:30:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:35.891 21:30:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:35.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:35.891 21:30:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:35.891 21:30:49 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:25:35.891 21:30:49 -- common/autotest_common.sh@10 -- # set +x 00:25:35.891 [2024-04-24 21:30:49.944083] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization... 00:25:35.891 [2024-04-24 21:30:49.944194] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:35.891 EAL: No free 2048 kB hugepages reported on node 1 00:25:35.891 [2024-04-24 21:30:50.081581] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:35.891 [2024-04-24 21:30:50.183707] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:35.891 [2024-04-24 21:30:50.183747] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:35.891 [2024-04-24 21:30:50.183758] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:35.891 [2024-04-24 21:30:50.183768] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:35.891 [2024-04-24 21:30:50.183776] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:35.891 [2024-04-24 21:30:50.183803] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:35.891 21:30:50 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:35.891 21:30:50 -- common/autotest_common.sh@850 -- # return 0 00:25:35.891 21:30:50 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:25:35.891 21:30:50 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:35.891 21:30:50 -- common/autotest_common.sh@10 -- # set +x 00:25:35.891 21:30:50 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:35.891 21:30:50 -- host/digest.sh@125 -- # [[ dsa_initiator == \d\s\a\_\t\a\r\g\e\t ]] 00:25:35.891 21:30:50 -- host/digest.sh@126 -- # common_target_config 00:25:35.891 21:30:50 -- host/digest.sh@43 -- # rpc_cmd 00:25:35.891 21:30:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:35.891 21:30:50 -- common/autotest_common.sh@10 -- # set +x 00:25:35.891 null0 00:25:35.891 [2024-04-24 21:30:50.821809] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:35.891 [2024-04-24 21:30:50.845950] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:36.173 21:30:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:36.173 21:30:50 -- host/digest.sh@128 -- # run_bperf randread 4096 128 true 00:25:36.173 21:30:50 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:36.173 21:30:50 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:36.173 21:30:50 -- host/digest.sh@80 -- # rw=randread 00:25:36.173 21:30:50 -- host/digest.sh@80 -- # bs=4096 00:25:36.173 21:30:50 -- host/digest.sh@80 -- # qd=128 00:25:36.173 21:30:50 -- host/digest.sh@80 -- # scan_dsa=true 00:25:36.173 21:30:50 -- host/digest.sh@83 -- # bperfpid=1353189 00:25:36.173 21:30:50 
-- host/digest.sh@84 -- # waitforlisten 1353189 /var/tmp/bperf.sock 00:25:36.173 21:30:50 -- common/autotest_common.sh@817 -- # '[' -z 1353189 ']' 00:25:36.173 21:30:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:36.173 21:30:50 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:36.173 21:30:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:36.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:36.173 21:30:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:36.173 21:30:50 -- common/autotest_common.sh@10 -- # set +x 00:25:36.173 21:30:50 -- host/digest.sh@82 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:25:36.173 [2024-04-24 21:30:50.923995] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization... 00:25:36.173 [2024-04-24 21:30:50.924102] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1353189 ] 00:25:36.173 EAL: No free 2048 kB hugepages reported on node 1 00:25:36.173 [2024-04-24 21:30:51.040434] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:36.173 [2024-04-24 21:30:51.135985] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:36.740 21:30:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:36.740 21:30:51 -- common/autotest_common.sh@850 -- # return 0 00:25:36.740 21:30:51 -- host/digest.sh@86 -- # true 00:25:36.740 21:30:51 -- host/digest.sh@86 -- # bperf_rpc dsa_scan_accel_module 00:25:36.740 21:30:51 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock dsa_scan_accel_module 00:25:37.001 [2024-04-24 21:30:51.736515] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:25:37.001 21:30:51 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:37.001 21:30:51 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:45.128 21:30:58 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:45.128 21:30:58 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:45.128 nvme0n1 00:25:45.128 21:30:59 -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:45.128 21:30:59 -- host/digest.sh@19 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:45.128 Running I/O for 2 seconds... 
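For reference, the sequence the harness replays against the bperf RPC socket for this run, condensed from the xtrace records above (a sketch; $rpc and $sock are shorthand introduced here, and --ddgst is what enables the NVMe/TCP data digest whose crc32c work DSA then absorbs):

    rpc=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/bperf.sock
    # bdevperf was started with --wait-for-rpc, so the accel module can be
    # swapped in before the framework initializes
    $rpc -s $sock dsa_scan_accel_module      # route crc32c to user-mode DSA
    $rpc -s $sock framework_start_init
    $rpc -s $sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 \
        -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s $sock perform_tests               # drives the 2-second randread run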
00:25:46.509 00:25:46.509 Latency(us) 00:25:46.509 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:46.509 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:25:46.509 nvme0n1 : 2.00 22842.84 89.23 0.00 0.00 5597.29 2862.89 13107.20 00:25:46.509 =================================================================================================================== 00:25:46.509 Total : 22842.84 89.23 0.00 0.00 5597.29 2862.89 13107.20 00:25:46.509 0 00:25:46.509 21:31:01 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:46.509 21:31:01 -- host/digest.sh@93 -- # get_accel_stats 00:25:46.509 21:31:01 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:46.509 21:31:01 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:46.509 21:31:01 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:46.509 | select(.opcode=="crc32c") 00:25:46.509 | "\(.module_name) \(.executed)"' 00:25:46.509 21:31:01 -- host/digest.sh@94 -- # true 00:25:46.509 21:31:01 -- host/digest.sh@94 -- # exp_module=dsa 00:25:46.509 21:31:01 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:46.509 21:31:01 -- host/digest.sh@96 -- # [[ dsa == \d\s\a ]] 00:25:46.509 21:31:01 -- host/digest.sh@98 -- # killprocess 1353189 00:25:46.509 21:31:01 -- common/autotest_common.sh@936 -- # '[' -z 1353189 ']' 00:25:46.509 21:31:01 -- common/autotest_common.sh@940 -- # kill -0 1353189 00:25:46.509 21:31:01 -- common/autotest_common.sh@941 -- # uname 00:25:46.509 21:31:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:46.509 21:31:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1353189 00:25:46.509 21:31:01 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:25:46.509 21:31:01 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:25:46.509 21:31:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1353189' 00:25:46.509 killing process with pid 1353189 00:25:46.509 21:31:01 -- common/autotest_common.sh@955 -- # kill 1353189 00:25:46.509 Received shutdown signal, test time was about 2.000000 seconds 00:25:46.509 00:25:46.509 Latency(us) 00:25:46.509 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:46.509 =================================================================================================================== 00:25:46.509 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:46.509 21:31:01 -- common/autotest_common.sh@960 -- # wait 1353189 00:25:49.048 21:31:03 -- host/digest.sh@129 -- # run_bperf randread 131072 16 true 00:25:49.048 21:31:03 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:49.048 21:31:03 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:49.048 21:31:03 -- host/digest.sh@80 -- # rw=randread 00:25:49.048 21:31:03 -- host/digest.sh@80 -- # bs=131072 00:25:49.048 21:31:03 -- host/digest.sh@80 -- # qd=16 00:25:49.048 21:31:03 -- host/digest.sh@80 -- # scan_dsa=true 00:25:49.048 21:31:03 -- host/digest.sh@83 -- # bperfpid=1355578 00:25:49.048 21:31:03 -- host/digest.sh@84 -- # waitforlisten 1355578 /var/tmp/bperf.sock 00:25:49.048 21:31:03 -- common/autotest_common.sh@817 -- # '[' -z 1355578 ']' 00:25:49.048 21:31:03 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:49.048 21:31:03 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:49.048 21:31:03 -- common/autotest_common.sh@824 -- # echo 'Waiting 
for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:49.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:49.048 21:31:03 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:49.048 21:31:03 -- common/autotest_common.sh@10 -- # set +x 00:25:49.048 21:31:03 -- host/digest.sh@82 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:25:49.048 [2024-04-24 21:31:03.667890] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization... 00:25:49.048 [2024-04-24 21:31:03.668000] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1355578 ] 00:25:49.048 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:49.048 Zero copy mechanism will not be used. 00:25:49.048 EAL: No free 2048 kB hugepages reported on node 1 00:25:49.048 [2024-04-24 21:31:03.778682] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:49.048 [2024-04-24 21:31:03.875353] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:49.617 21:31:04 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:49.618 21:31:04 -- common/autotest_common.sh@850 -- # return 0 00:25:49.618 21:31:04 -- host/digest.sh@86 -- # true 00:25:49.618 21:31:04 -- host/digest.sh@86 -- # bperf_rpc dsa_scan_accel_module 00:25:49.618 21:31:04 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock dsa_scan_accel_module 00:25:49.618 [2024-04-24 21:31:04.487869] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:25:49.618 21:31:04 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:49.618 21:31:04 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:57.741 21:31:11 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:57.741 21:31:11 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:57.741 nvme0n1 00:25:57.741 21:31:11 -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:57.741 21:31:11 -- host/digest.sh@19 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:57.741 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:57.741 Zero copy mechanism will not be used. 00:25:57.741 Running I/O for 2 seconds... 
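Every run above and below ends with the same verification: pull accel statistics over the bperf socket and extract the crc32c row. A near-verbatim rendering of the get_accel_stats step logged at host/digest.sh@36-37:

    get_accel_stats() {
        /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/bperf.sock accel_get_stats |
        jq -rc '.operations[]
            | select(.opcode=="crc32c")
            | "\(.module_name) \(.executed)"'
    }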
00:25:59.124 00:25:59.124 Latency(us) 00:25:59.124 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:59.124 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:25:59.124 nvme0n1 : 2.00 5722.25 715.28 0.00 0.00 2793.41 845.07 5760.27 00:25:59.124 =================================================================================================================== 00:25:59.124 Total : 5722.25 715.28 0.00 0.00 2793.41 845.07 5760.27 00:25:59.124 0 00:25:59.124 21:31:13 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:59.124 21:31:13 -- host/digest.sh@93 -- # get_accel_stats 00:25:59.124 21:31:13 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:59.124 21:31:13 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:59.124 21:31:13 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:59.124 | select(.opcode=="crc32c") 00:25:59.124 | "\(.module_name) \(.executed)"' 00:25:59.124 21:31:13 -- host/digest.sh@94 -- # true 00:25:59.124 21:31:13 -- host/digest.sh@94 -- # exp_module=dsa 00:25:59.124 21:31:13 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:59.124 21:31:13 -- host/digest.sh@96 -- # [[ dsa == \d\s\a ]] 00:25:59.124 21:31:13 -- host/digest.sh@98 -- # killprocess 1355578 00:25:59.124 21:31:13 -- common/autotest_common.sh@936 -- # '[' -z 1355578 ']' 00:25:59.124 21:31:13 -- common/autotest_common.sh@940 -- # kill -0 1355578 00:25:59.124 21:31:13 -- common/autotest_common.sh@941 -- # uname 00:25:59.124 21:31:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:59.124 21:31:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1355578 00:25:59.124 21:31:13 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:25:59.124 21:31:13 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:25:59.124 21:31:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1355578' 00:25:59.124 killing process with pid 1355578 00:25:59.124 21:31:13 -- common/autotest_common.sh@955 -- # kill 1355578 00:25:59.124 Received shutdown signal, test time was about 2.000000 seconds 00:25:59.124 00:25:59.124 Latency(us) 00:25:59.124 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:59.124 =================================================================================================================== 00:25:59.124 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:59.124 21:31:13 -- common/autotest_common.sh@960 -- # wait 1355578 00:26:01.663 21:31:16 -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 true 00:26:01.663 21:31:16 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:01.663 21:31:16 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:01.663 21:31:16 -- host/digest.sh@80 -- # rw=randwrite 00:26:01.663 21:31:16 -- host/digest.sh@80 -- # bs=4096 00:26:01.663 21:31:16 -- host/digest.sh@80 -- # qd=128 00:26:01.663 21:31:16 -- host/digest.sh@80 -- # scan_dsa=true 00:26:01.663 21:31:16 -- host/digest.sh@83 -- # bperfpid=1357976 00:26:01.663 21:31:16 -- host/digest.sh@84 -- # waitforlisten 1357976 /var/tmp/bperf.sock 00:26:01.663 21:31:16 -- common/autotest_common.sh@817 -- # '[' -z 1357976 ']' 00:26:01.663 21:31:16 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:01.664 21:31:16 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:01.664 21:31:16 -- common/autotest_common.sh@824 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:01.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:01.664 21:31:16 -- host/digest.sh@82 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:01.664 21:31:16 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:01.664 21:31:16 -- common/autotest_common.sh@10 -- # set +x 00:26:01.664 [2024-04-24 21:31:16.293737] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization... 00:26:01.664 [2024-04-24 21:31:16.293855] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1357976 ] 00:26:01.664 EAL: No free 2048 kB hugepages reported on node 1 00:26:01.664 [2024-04-24 21:31:16.409238] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:01.664 [2024-04-24 21:31:16.506610] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:02.231 21:31:16 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:02.231 21:31:16 -- common/autotest_common.sh@850 -- # return 0 00:26:02.231 21:31:16 -- host/digest.sh@86 -- # true 00:26:02.231 21:31:16 -- host/digest.sh@86 -- # bperf_rpc dsa_scan_accel_module 00:26:02.231 21:31:16 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock dsa_scan_accel_module 00:26:02.231 [2024-04-24 21:31:17.103123] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:26:02.231 21:31:17 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:02.231 21:31:17 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:10.349 21:31:24 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:10.349 21:31:24 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:10.349 nvme0n1 00:26:10.349 21:31:24 -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:10.349 21:31:24 -- host/digest.sh@19 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:10.349 Running I/O for 2 seconds... 
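The waitforlisten calls above block until the freshly launched bdevperf exposes its RPC socket. A simplified sketch consistent with the max_retries=100 and rpc_addr defaults visible in the autotest_common.sh records (the polling interval and loop shape are assumptions):

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1   # app died during startup
            [[ -S $rpc_addr ]] && return 0           # socket is accepting RPCs
            sleep 0.1                                # assumed interval
        done
        return 1
    }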
00:26:11.727 00:26:11.728 Latency(us) 00:26:11.728 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:11.728 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:11.728 nvme0n1 : 2.00 26997.31 105.46 0.00 0.00 4731.70 2311.01 11382.57 00:26:11.728 =================================================================================================================== 00:26:11.728 Total : 26997.31 105.46 0.00 0.00 4731.70 2311.01 11382.57 00:26:11.728 0 00:26:11.728 21:31:26 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:11.728 21:31:26 -- host/digest.sh@93 -- # get_accel_stats 00:26:11.728 21:31:26 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:11.728 21:31:26 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:11.728 | select(.opcode=="crc32c") 00:26:11.728 | "\(.module_name) \(.executed)"' 00:26:11.728 21:31:26 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:11.728 21:31:26 -- host/digest.sh@94 -- # true 00:26:11.728 21:31:26 -- host/digest.sh@94 -- # exp_module=dsa 00:26:11.728 21:31:26 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:11.728 21:31:26 -- host/digest.sh@96 -- # [[ dsa == \d\s\a ]] 00:26:11.728 21:31:26 -- host/digest.sh@98 -- # killprocess 1357976 00:26:11.728 21:31:26 -- common/autotest_common.sh@936 -- # '[' -z 1357976 ']' 00:26:11.728 21:31:26 -- common/autotest_common.sh@940 -- # kill -0 1357976 00:26:11.728 21:31:26 -- common/autotest_common.sh@941 -- # uname 00:26:11.728 21:31:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:11.728 21:31:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1357976 00:26:11.728 21:31:26 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:26:11.728 21:31:26 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:26:11.728 21:31:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1357976' 00:26:11.728 killing process with pid 1357976 00:26:11.728 21:31:26 -- common/autotest_common.sh@955 -- # kill 1357976 00:26:11.728 Received shutdown signal, test time was about 2.000000 seconds 00:26:11.728 00:26:11.728 Latency(us) 00:26:11.728 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:11.728 =================================================================================================================== 00:26:11.728 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:11.728 21:31:26 -- common/autotest_common.sh@960 -- # wait 1357976 00:26:14.265 21:31:28 -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 true 00:26:14.265 21:31:28 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:14.265 21:31:28 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:14.265 21:31:28 -- host/digest.sh@80 -- # rw=randwrite 00:26:14.265 21:31:28 -- host/digest.sh@80 -- # bs=131072 00:26:14.266 21:31:28 -- host/digest.sh@80 -- # qd=16 00:26:14.266 21:31:28 -- host/digest.sh@80 -- # scan_dsa=true 00:26:14.266 21:31:28 -- host/digest.sh@83 -- # bperfpid=1360361 00:26:14.266 21:31:28 -- host/digest.sh@84 -- # waitforlisten 1360361 /var/tmp/bperf.sock 00:26:14.266 21:31:28 -- common/autotest_common.sh@817 -- # '[' -z 1360361 ']' 00:26:14.266 21:31:28 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:14.266 21:31:28 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:14.266 21:31:28 -- common/autotest_common.sh@824 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:14.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:14.266 21:31:28 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:14.266 21:31:28 -- common/autotest_common.sh@10 -- # set +x 00:26:14.266 21:31:28 -- host/digest.sh@82 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:14.266 [2024-04-24 21:31:28.938226] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization... 00:26:14.266 [2024-04-24 21:31:28.938367] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1360361 ] 00:26:14.266 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:14.266 Zero copy mechanism will not be used. 00:26:14.266 EAL: No free 2048 kB hugepages reported on node 1 00:26:14.266 [2024-04-24 21:31:29.066695] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:14.266 [2024-04-24 21:31:29.157808] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:14.833 21:31:29 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:14.833 21:31:29 -- common/autotest_common.sh@850 -- # return 0 00:26:14.833 21:31:29 -- host/digest.sh@86 -- # true 00:26:14.833 21:31:29 -- host/digest.sh@86 -- # bperf_rpc dsa_scan_accel_module 00:26:14.833 21:31:29 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock dsa_scan_accel_module 00:26:14.833 [2024-04-24 21:31:29.750322] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:26:14.833 21:31:29 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:14.833 21:31:29 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:22.960 21:31:36 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:22.960 21:31:36 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:22.960 nvme0n1 00:26:22.960 21:31:37 -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:22.960 21:31:37 -- host/digest.sh@19 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:22.960 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:22.960 Zero copy mechanism will not be used. 00:26:22.960 Running I/O for 2 seconds... 
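A hedged reconstruction of the killprocess sequence logged repeatedly at common/autotest_common.sh@936-960 (the uname and sudo branches visible in the records are elided, since these runs never take them):

    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 1          # still alive?
        ps --no-headers -o comm= "$pid"     # reactor_1 for the bperf runs here
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true     # reaping works: bperf is a child job
    }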
00:26:24.342 00:26:24.342 Latency(us) 00:26:24.342 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:24.342 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:26:24.342 nvme0n1 : 2.00 5316.07 664.51 0.00 0.00 3005.94 1836.73 6726.06 00:26:24.342 =================================================================================================================== 00:26:24.342 Total : 5316.07 664.51 0.00 0.00 3005.94 1836.73 6726.06 00:26:24.342 0 00:26:24.342 21:31:39 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:24.342 21:31:39 -- host/digest.sh@93 -- # get_accel_stats 00:26:24.342 21:31:39 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:24.342 21:31:39 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:24.342 | select(.opcode=="crc32c") 00:26:24.342 | "\(.module_name) \(.executed)"' 00:26:24.342 21:31:39 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:24.342 21:31:39 -- host/digest.sh@94 -- # true 00:26:24.342 21:31:39 -- host/digest.sh@94 -- # exp_module=dsa 00:26:24.342 21:31:39 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:24.342 21:31:39 -- host/digest.sh@96 -- # [[ dsa == \d\s\a ]] 00:26:24.342 21:31:39 -- host/digest.sh@98 -- # killprocess 1360361 00:26:24.342 21:31:39 -- common/autotest_common.sh@936 -- # '[' -z 1360361 ']' 00:26:24.342 21:31:39 -- common/autotest_common.sh@940 -- # kill -0 1360361 00:26:24.342 21:31:39 -- common/autotest_common.sh@941 -- # uname 00:26:24.342 21:31:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:24.342 21:31:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1360361 00:26:24.601 21:31:39 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:26:24.601 21:31:39 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:26:24.601 21:31:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1360361' 00:26:24.601 killing process with pid 1360361 00:26:24.601 21:31:39 -- common/autotest_common.sh@955 -- # kill 1360361 00:26:24.601 Received shutdown signal, test time was about 2.000000 seconds 00:26:24.601 00:26:24.601 Latency(us) 00:26:24.601 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:24.601 =================================================================================================================== 00:26:24.601 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:24.601 21:31:39 -- common/autotest_common.sh@960 -- # wait 1360361 00:26:27.141 21:31:41 -- host/digest.sh@132 -- # killprocess 1353018 00:26:27.141 21:31:41 -- common/autotest_common.sh@936 -- # '[' -z 1353018 ']' 00:26:27.141 21:31:41 -- common/autotest_common.sh@940 -- # kill -0 1353018 00:26:27.141 21:31:41 -- common/autotest_common.sh@941 -- # uname 00:26:27.141 21:31:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:27.141 21:31:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1353018 00:26:27.141 21:31:41 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:27.141 21:31:41 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:27.141 21:31:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1353018' 00:26:27.141 killing process with pid 1353018 00:26:27.141 21:31:41 -- common/autotest_common.sh@955 -- # kill 1353018 00:26:27.141 21:31:41 -- common/autotest_common.sh@960 -- # wait 1353018 00:26:27.401 00:26:27.401 real 0m52.318s 
00:26:27.401 user 1m12.680s 00:26:27.401 sys 0m3.492s 00:26:27.401 21:31:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:27.401 21:31:42 -- common/autotest_common.sh@10 -- # set +x 00:26:27.401 ************************************ 00:26:27.401 END TEST nvmf_digest_dsa_initiator 00:26:27.401 ************************************ 00:26:27.401 21:31:42 -- host/digest.sh@143 -- # run_test nvmf_digest_dsa_target run_digest dsa_target 00:26:27.401 21:31:42 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:26:27.401 21:31:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:27.401 21:31:42 -- common/autotest_common.sh@10 -- # set +x 00:26:27.401 ************************************ 00:26:27.401 START TEST nvmf_digest_dsa_target 00:26:27.401 ************************************ 00:26:27.401 21:31:42 -- common/autotest_common.sh@1111 -- # run_digest dsa_target 00:26:27.401 21:31:42 -- host/digest.sh@120 -- # local dsa_initiator 00:26:27.401 21:31:42 -- host/digest.sh@121 -- # [[ dsa_target == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:26:27.401 21:31:42 -- host/digest.sh@121 -- # dsa_initiator=false 00:26:27.401 21:31:42 -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:26:27.401 21:31:42 -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:26:27.401 21:31:42 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:26:27.401 21:31:42 -- common/autotest_common.sh@710 -- # xtrace_disable 00:26:27.401 21:31:42 -- common/autotest_common.sh@10 -- # set +x 00:26:27.401 21:31:42 -- nvmf/common.sh@470 -- # nvmfpid=1363074 00:26:27.401 21:31:42 -- nvmf/common.sh@471 -- # waitforlisten 1363074 00:26:27.401 21:31:42 -- common/autotest_common.sh@817 -- # '[' -z 1363074 ']' 00:26:27.401 21:31:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:27.401 21:31:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:27.401 21:31:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:27.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:27.401 21:31:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:27.401 21:31:42 -- common/autotest_common.sh@10 -- # set +x 00:26:27.401 21:31:42 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:26:27.662 [2024-04-24 21:31:42.375755] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization... 00:26:27.662 [2024-04-24 21:31:42.375854] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:27.662 EAL: No free 2048 kB hugepages reported on node 1 00:26:27.662 [2024-04-24 21:31:42.497553] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:27.662 [2024-04-24 21:31:42.592535] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:27.662 [2024-04-24 21:31:42.592568] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:27.662 [2024-04-24 21:31:42.592578] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:27.662 [2024-04-24 21:31:42.592587] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:26:27.662 [2024-04-24 21:31:42.592594] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:27.662 [2024-04-24 21:31:42.592619] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:28.235 21:31:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:28.235 21:31:43 -- common/autotest_common.sh@850 -- # return 0 00:26:28.235 21:31:43 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:26:28.235 21:31:43 -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:28.235 21:31:43 -- common/autotest_common.sh@10 -- # set +x 00:26:28.235 21:31:43 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:28.235 21:31:43 -- host/digest.sh@125 -- # [[ dsa_target == \d\s\a\_\t\a\r\g\e\t ]] 00:26:28.235 21:31:43 -- host/digest.sh@125 -- # rpc_cmd dsa_scan_accel_module 00:26:28.235 21:31:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:28.235 21:31:43 -- common/autotest_common.sh@10 -- # set +x 00:26:28.235 [2024-04-24 21:31:43.101061] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:26:28.235 21:31:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:28.235 21:31:43 -- host/digest.sh@126 -- # common_target_config 00:26:28.235 21:31:43 -- host/digest.sh@43 -- # rpc_cmd 00:26:28.235 21:31:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:28.235 21:31:43 -- common/autotest_common.sh@10 -- # set +x 00:26:36.439 null0 00:26:36.439 [2024-04-24 21:31:49.946546] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:36.439 [2024-04-24 21:31:49.973440] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:36.439 21:31:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:36.439 21:31:49 -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:26:36.439 21:31:49 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:36.439 21:31:49 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:36.439 21:31:49 -- host/digest.sh@80 -- # rw=randread 00:26:36.439 21:31:49 -- host/digest.sh@80 -- # bs=4096 00:26:36.439 21:31:49 -- host/digest.sh@80 -- # qd=128 00:26:36.439 21:31:49 -- host/digest.sh@80 -- # scan_dsa=false 00:26:36.439 21:31:49 -- host/digest.sh@83 -- # bperfpid=1364583 00:26:36.439 21:31:49 -- host/digest.sh@84 -- # waitforlisten 1364583 /var/tmp/bperf.sock 00:26:36.439 21:31:49 -- common/autotest_common.sh@817 -- # '[' -z 1364583 ']' 00:26:36.439 21:31:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:36.439 21:31:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:36.439 21:31:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:36.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:36.439 21:31:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:36.439 21:31:49 -- common/autotest_common.sh@10 -- # set +x 00:26:36.439 21:31:49 -- host/digest.sh@82 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:36.439 [2024-04-24 21:31:50.049355] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization... 
00:26:36.439 [2024-04-24 21:31:50.049456] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1364583 ] 00:26:36.439 EAL: No free 2048 kB hugepages reported on node 1 00:26:36.439 [2024-04-24 21:31:50.144624] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:36.439 [2024-04-24 21:31:50.233767] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:36.439 21:31:50 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:36.439 21:31:50 -- common/autotest_common.sh@850 -- # return 0 00:26:36.439 21:31:50 -- host/digest.sh@86 -- # false 00:26:36.439 21:31:50 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:36.439 21:31:50 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:36.439 21:31:51 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:36.439 21:31:51 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:36.439 nvme0n1 00:26:36.439 21:31:51 -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:36.439 21:31:51 -- host/digest.sh@19 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:36.698 Running I/O for 2 seconds... 00:26:38.605 00:26:38.605 Latency(us) 00:26:38.605 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:38.605 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:26:38.605 nvme0n1 : 2.04 21324.60 83.30 0.00 0.00 5877.45 2173.04 46634.04 00:26:38.605 =================================================================================================================== 00:26:38.605 Total : 21324.60 83.30 0.00 0.00 5877.45 2173.04 46634.04 00:26:38.605 0 00:26:38.605 21:31:53 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:38.605 21:31:53 -- host/digest.sh@93 -- # get_accel_stats 00:26:38.605 21:31:53 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:38.605 21:31:53 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:38.605 | select(.opcode=="crc32c") 00:26:38.605 | "\(.module_name) \(.executed)"' 00:26:38.605 21:31:53 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:38.865 21:31:53 -- host/digest.sh@94 -- # false 00:26:38.865 21:31:53 -- host/digest.sh@94 -- # exp_module=software 00:26:38.865 21:31:53 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:38.865 21:31:53 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:38.865 21:31:53 -- host/digest.sh@98 -- # killprocess 1364583 00:26:38.865 21:31:53 -- common/autotest_common.sh@936 -- # '[' -z 1364583 ']' 00:26:38.865 21:31:53 -- common/autotest_common.sh@940 -- # kill -0 1364583 00:26:38.865 21:31:53 -- common/autotest_common.sh@941 -- # uname 00:26:38.865 21:31:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:38.865 21:31:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1364583 00:26:38.865 21:31:53 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:26:38.865 21:31:53 
-- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:26:38.865 21:31:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1364583' 00:26:38.865 killing process with pid 1364583 00:26:38.865 21:31:53 -- common/autotest_common.sh@955 -- # kill 1364583 00:26:38.865 Received shutdown signal, test time was about 2.000000 seconds 00:26:38.865 00:26:38.865 Latency(us) 00:26:38.865 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:38.865 =================================================================================================================== 00:26:38.865 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:38.866 21:31:53 -- common/autotest_common.sh@960 -- # wait 1364583 00:26:39.126 21:31:54 -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:26:39.126 21:31:54 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:39.126 21:31:54 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:39.126 21:31:54 -- host/digest.sh@80 -- # rw=randread 00:26:39.126 21:31:54 -- host/digest.sh@80 -- # bs=131072 00:26:39.126 21:31:54 -- host/digest.sh@80 -- # qd=16 00:26:39.126 21:31:54 -- host/digest.sh@80 -- # scan_dsa=false 00:26:39.126 21:31:54 -- host/digest.sh@83 -- # bperfpid=1365273 00:26:39.126 21:31:54 -- host/digest.sh@84 -- # waitforlisten 1365273 /var/tmp/bperf.sock 00:26:39.126 21:31:54 -- common/autotest_common.sh@817 -- # '[' -z 1365273 ']' 00:26:39.126 21:31:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:39.127 21:31:54 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:39.127 21:31:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:39.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:39.127 21:31:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:39.127 21:31:54 -- common/autotest_common.sh@10 -- # set +x 00:26:39.127 21:31:54 -- host/digest.sh@82 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:39.385 [2024-04-24 21:31:54.154939] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization... 00:26:39.385 [2024-04-24 21:31:54.155080] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1365273 ] 00:26:39.385 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:39.385 Zero copy mechanism will not be used. 
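The only functional difference from the nvmf_digest_dsa_initiator runs earlier is the scan_dsa flag, which is why these records execute '# false' at host/digest.sh@86 and the expected module becomes software. A sketch of that branch as logged at host/digest.sh@86-87:

    if "$scan_dsa"; then                     # literally runs true or false
        bperf_rpc dsa_scan_accel_module      # initiator tests: DSA offload
    fi
    bperf_rpc framework_start_init           # target tests: software crc32c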
00:26:39.385 EAL: No free 2048 kB hugepages reported on node 1 00:26:39.385 [2024-04-24 21:31:54.286002] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:39.644 [2024-04-24 21:31:54.375011] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:39.902 21:31:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:39.902 21:31:54 -- common/autotest_common.sh@850 -- # return 0 00:26:39.902 21:31:54 -- host/digest.sh@86 -- # false 00:26:39.902 21:31:54 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:39.902 21:31:54 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:40.161 21:31:55 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:40.161 21:31:55 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:40.421 nvme0n1 00:26:40.421 21:31:55 -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:40.421 21:31:55 -- host/digest.sh@19 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:40.682 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:40.682 Zero copy mechanism will not be used. 00:26:40.682 Running I/O for 2 seconds... 00:26:42.589 00:26:42.589 Latency(us) 00:26:42.589 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:42.589 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:26:42.590 nvme0n1 : 2.00 5966.65 745.83 0.00 0.00 2678.99 569.13 13797.05 00:26:42.590 =================================================================================================================== 00:26:42.590 Total : 5966.65 745.83 0.00 0.00 2678.99 569.13 13797.05 00:26:42.590 0 00:26:42.590 21:31:57 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:42.590 21:31:57 -- host/digest.sh@93 -- # get_accel_stats 00:26:42.590 21:31:57 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:42.590 21:31:57 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:42.590 | select(.opcode=="crc32c") 00:26:42.590 | "\(.module_name) \(.executed)"' 00:26:42.590 21:31:57 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:42.851 21:31:57 -- host/digest.sh@94 -- # false 00:26:42.851 21:31:57 -- host/digest.sh@94 -- # exp_module=software 00:26:42.851 21:31:57 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:42.851 21:31:57 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:42.851 21:31:57 -- host/digest.sh@98 -- # killprocess 1365273 00:26:42.851 21:31:57 -- common/autotest_common.sh@936 -- # '[' -z 1365273 ']' 00:26:42.851 21:31:57 -- common/autotest_common.sh@940 -- # kill -0 1365273 00:26:42.851 21:31:57 -- common/autotest_common.sh@941 -- # uname 00:26:42.851 21:31:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:42.851 21:31:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1365273 00:26:42.851 21:31:57 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:26:42.851 21:31:57 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:26:42.851 21:31:57 -- common/autotest_common.sh@954 -- # echo 'killing process 
with pid 1365273' 00:26:42.851 killing process with pid 1365273 00:26:42.851 21:31:57 -- common/autotest_common.sh@955 -- # kill 1365273 00:26:42.851 Received shutdown signal, test time was about 2.000000 seconds 00:26:42.851 00:26:42.851 Latency(us) 00:26:42.851 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:42.851 =================================================================================================================== 00:26:42.851 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:42.851 21:31:57 -- common/autotest_common.sh@960 -- # wait 1365273 00:26:43.112 21:31:57 -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:26:43.112 21:31:57 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:43.112 21:31:57 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:43.112 21:31:57 -- host/digest.sh@80 -- # rw=randwrite 00:26:43.112 21:31:57 -- host/digest.sh@80 -- # bs=4096 00:26:43.112 21:31:57 -- host/digest.sh@80 -- # qd=128 00:26:43.112 21:31:57 -- host/digest.sh@80 -- # scan_dsa=false 00:26:43.112 21:31:57 -- host/digest.sh@83 -- # bperfpid=1366099 00:26:43.112 21:31:57 -- host/digest.sh@84 -- # waitforlisten 1366099 /var/tmp/bperf.sock 00:26:43.112 21:31:57 -- common/autotest_common.sh@817 -- # '[' -z 1366099 ']' 00:26:43.112 21:31:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:43.112 21:31:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:43.112 21:31:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:43.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:43.112 21:31:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:43.112 21:31:57 -- common/autotest_common.sh@10 -- # set +x 00:26:43.112 21:31:57 -- host/digest.sh@82 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:43.112 [2024-04-24 21:31:58.064317] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization... 
00:26:43.112 [2024-04-24 21:31:58.064438] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1366099 ]
00:26:43.371 EAL: No free 2048 kB hugepages reported on node 1
00:26:43.371 [2024-04-24 21:31:58.179684] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:43.371 [2024-04-24 21:31:58.269430] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:26:43.943 21:31:58 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:26:43.943 21:31:58 -- common/autotest_common.sh@850 -- # return 0
00:26:43.943 21:31:58 -- host/digest.sh@86 -- # false
00:26:43.943 21:31:58 -- host/digest.sh@87 -- # bperf_rpc framework_start_init
00:26:43.943 21:31:58 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
00:26:44.200 21:31:59 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:44.200 21:31:59 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:44.460 nvme0n1
00:26:44.460 21:31:59 -- host/digest.sh@92 -- # bperf_py perform_tests
00:26:44.460 21:31:59 -- host/digest.sh@19 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:26:44.720 Running I/O for 2 seconds...
00:26:46.627
00:26:46.627 Latency(us)
00:26:46.627 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:46.627 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:26:46.627 nvme0n1 : 2.00 25639.22 100.15 0.00 0.00 4987.59 2569.70 14831.83
00:26:46.627 ===================================================================================================================
00:26:46.627 Total : 25639.22 100.15 0.00 0.00 4987.59 2569.70 14831.83
00:26:46.627 0
00:26:46.628 21:32:01 -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:26:46.628 21:32:01 -- host/digest.sh@93 -- # get_accel_stats
00:26:46.628 21:32:01 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:26:46.628 21:32:01 -- host/digest.sh@37 -- # jq -rc '.operations[]
00:26:46.628 | select(.opcode=="crc32c")
00:26:46.628 | "\(.module_name) \(.executed)"'
00:26:46.628 21:32:01 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:26:46.886 21:32:01 -- host/digest.sh@94 -- # false
00:26:46.886 21:32:01 -- host/digest.sh@94 -- # exp_module=software
00:26:46.886 21:32:01 -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:26:46.886 21:32:01 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:26:46.886 21:32:01 -- host/digest.sh@98 -- # killprocess 1366099
00:26:46.886 21:32:01 -- common/autotest_common.sh@936 -- # '[' -z 1366099 ']'
00:26:46.886 21:32:01 -- common/autotest_common.sh@940 -- # kill -0 1366099
00:26:46.886 21:32:01 -- common/autotest_common.sh@941 -- # uname
00:26:46.886 21:32:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:26:46.886 21:32:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1366099
00:26:46.886 21:32:01 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:26:46.886 21:32:01 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:26:46.886 21:32:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1366099'
00:26:46.886 killing process with pid 1366099
21:32:01 -- common/autotest_common.sh@955 -- # kill 1366099
00:26:46.886 Received shutdown signal, test time was about 2.000000 seconds
00:26:46.886
00:26:46.886 Latency(us)
00:26:46.886 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:46.886 ===================================================================================================================
00:26:46.886 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:26:46.886 21:32:01 -- common/autotest_common.sh@960 -- # wait 1366099
00:26:47.145 21:32:02 -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false
00:26:47.145 21:32:02 -- host/digest.sh@77 -- # local rw bs qd scan_dsa
00:26:47.145 21:32:02 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module
00:26:47.145 21:32:02 -- host/digest.sh@80 -- # rw=randwrite
00:26:47.145 21:32:02 -- host/digest.sh@80 -- # bs=131072
00:26:47.145 21:32:02 -- host/digest.sh@80 -- # qd=16
00:26:47.145 21:32:02 -- host/digest.sh@80 -- # scan_dsa=false
00:26:47.145 21:32:02 -- host/digest.sh@83 -- # bperfpid=1366870
00:26:47.145 21:32:02 -- host/digest.sh@84 -- # waitforlisten 1366870 /var/tmp/bperf.sock
00:26:47.145 21:32:02 -- common/autotest_common.sh@817 -- # '[' -z 1366870 ']'
00:26:47.145 21:32:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock
00:26:47.145 21:32:02 -- common/autotest_common.sh@822 -- # local max_retries=100
00:26:47.145 21:32:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:26:47.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
21:32:02 -- common/autotest_common.sh@826 -- # xtrace_disable
00:26:47.145 21:32:02 -- common/autotest_common.sh@10 -- # set +x
00:26:47.145 21:32:02 -- host/digest.sh@82 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc
00:26:47.404 [2024-04-24 21:32:02.169818] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization...
00:26:47.404 [2024-04-24 21:32:02.169978] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1366870 ]
00:26:47.404 I/O size of 131072 is greater than zero copy threshold (65536).
00:26:47.404 Zero copy mechanism will not be used.
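After each timed run the harness does not trust the IOPS table alone; it also reads the accel statistics back over the bperf socket to confirm which module actually executed the crc32c digests (here exp_module=software, since the run was invoked with scan_dsa=false). A sketch of that check, reusing the jq filter visible in the trace ($SPDK again stands in for the checkout root):

  # fetch accel stats and pull out "<module> <executed-count>" for crc32c
  read -r acc_module acc_executed < <(
    $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
      jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
  )
  # pass only if at least one crc32c was executed, and by the expected module
  (( acc_executed > 0 )) && [[ $acc_module == "$exp_module" ]]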
00:26:47.404 EAL: No free 2048 kB hugepages reported on node 1
00:26:47.404 [2024-04-24 21:32:02.302133] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:47.662 [2024-04-24 21:32:02.393440] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:26:47.921 21:32:02 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:26:47.921 21:32:02 -- common/autotest_common.sh@850 -- # return 0
00:26:47.921 21:32:02 -- host/digest.sh@86 -- # false
00:26:47.921 21:32:02 -- host/digest.sh@87 -- # bperf_rpc framework_start_init
00:26:47.921 21:32:02 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
00:26:48.179 21:32:03 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:48.179 21:32:03 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:48.748 nvme0n1
00:26:48.748 21:32:03 -- host/digest.sh@92 -- # bperf_py perform_tests
00:26:48.748 21:32:03 -- host/digest.sh@19 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:26:48.748 I/O size of 131072 is greater than zero copy threshold (65536).
00:26:48.748 Zero copy mechanism will not be used.
00:26:48.748 Running I/O for 2 seconds...
00:26:50.659
00:26:50.659 Latency(us)
00:26:50.659 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:50.659 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:26:50.659 nvme0n1 : 2.00 4861.20 607.65 0.00 0.00 3286.70 2190.28 6036.21
00:26:50.659 ===================================================================================================================
00:26:50.659 Total : 4861.20 607.65 0.00 0.00 3286.70 2190.28 6036.21
00:26:50.659 0
00:26:50.659 21:32:05 -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:26:50.659 21:32:05 -- host/digest.sh@93 -- # get_accel_stats
00:26:50.659 21:32:05 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:26:50.659 21:32:05 -- host/digest.sh@37 -- # jq -rc '.operations[]
00:26:50.659 | select(.opcode=="crc32c")
00:26:50.659 | "\(.module_name) \(.executed)"'
00:26:50.659 21:32:05 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:26:50.920 21:32:05 -- host/digest.sh@94 -- # false
00:26:50.920 21:32:05 -- host/digest.sh@94 -- # exp_module=software
00:26:50.920 21:32:05 -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:26:50.920 21:32:05 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:26:50.920 21:32:05 -- host/digest.sh@98 -- # killprocess 1366870
00:26:50.920 21:32:05 -- common/autotest_common.sh@936 -- # '[' -z 1366870 ']'
00:26:50.920 21:32:05 -- common/autotest_common.sh@940 -- # kill -0 1366870
00:26:50.920 21:32:05 -- common/autotest_common.sh@941 -- # uname
00:26:50.920 21:32:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:26:50.920 21:32:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1366870
00:26:50.920 21:32:05 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:26:50.920 21:32:05 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:26:50.920 21:32:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1366870'
00:26:50.920 killing process with pid 1366870
21:32:05 -- common/autotest_common.sh@955 -- # kill 1366870
00:26:50.920 Received shutdown signal, test time was about 2.000000 seconds
00:26:50.920
00:26:50.920 Latency(us)
00:26:50.920 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:50.920 ===================================================================================================================
00:26:50.920 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:26:50.920 21:32:05 -- common/autotest_common.sh@960 -- # wait 1366870
00:26:51.181 21:32:06 -- host/digest.sh@132 -- # killprocess 1363074
00:26:51.181 21:32:06 -- common/autotest_common.sh@936 -- # '[' -z 1363074 ']'
00:26:51.181 21:32:06 -- common/autotest_common.sh@940 -- # kill -0 1363074
00:26:51.442 21:32:06 -- common/autotest_common.sh@941 -- # uname
00:26:51.442 21:32:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:26:51.442 21:32:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1363074
00:26:51.442 21:32:06 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:26:51.442 21:32:06 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:26:51.442 21:32:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1363074'
00:26:51.442 killing process with pid 1363074
00:26:51.442 21:32:06 -- common/autotest_common.sh@955 -- # kill 1363074
00:26:51.442 21:32:06 -- common/autotest_common.sh@960 -- # wait 1363074
00:26:53.979
00:26:53.979 real 0m26.266s
00:26:53.979 user 0m35.019s
00:26:53.979 sys 0m3.314s
00:26:53.979 21:32:08 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:26:53.979 21:32:08 -- common/autotest_common.sh@10 -- # set +x
00:26:53.979 ************************************
00:26:53.979 END TEST nvmf_digest_dsa_target
00:26:53.979 ************************************
00:26:53.979 21:32:08 -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error
00:26:53.979 21:32:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:26:53.979 21:32:08 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:26:53.979 21:32:08 -- common/autotest_common.sh@10 -- # set +x
00:26:53.979 ************************************
00:26:53.979 START TEST nvmf_digest_error
00:26:53.979 ************************************
00:26:53.980 21:32:08 -- common/autotest_common.sh@1111 -- # run_digest_error
00:26:53.980 21:32:08 -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc
00:26:53.980 21:32:08 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt
00:26:53.980 21:32:08 -- common/autotest_common.sh@710 -- # xtrace_disable
00:26:53.980 21:32:08 -- common/autotest_common.sh@10 -- # set +x
00:26:53.980 21:32:08 -- nvmf/common.sh@470 -- # nvmfpid=1368238
00:26:53.980 21:32:08 -- nvmf/common.sh@471 -- # waitforlisten 1368238
00:26:53.980 21:32:08 -- common/autotest_common.sh@817 -- # '[' -z 1368238 ']'
00:26:53.980 21:32:08 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:26:53.980 21:32:08 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc
00:26:53.980 21:32:08 -- common/autotest_common.sh@822 -- # local max_retries=100
00:26:53.980 21:32:08 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:26:53.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
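The nvmf_digest_error test starting here depends on the target side also being launched with --wait-for-rpc: the crc32c opcode has to be reassigned to the accel 'error' module before framework initialization, otherwise the software module would already own it and nothing could be injected later. The assignment is the single RPC visible in the trace below (rpc_cmd in this suite is, in effect, rpc.py against the target's default /var/tmp/spdk.sock socket):

  # route all crc32c operations through the error-injection accel module
  $SPDK/scripts/rpc.py accel_assign_opc -o crc32c -m error

The rest of the target setup (null0 bdev, TCP transport, listener on 10.0.0.2 port 4420) then proceeds as in the earlier tests.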
00:26:53.980 21:32:08 -- common/autotest_common.sh@826 -- # xtrace_disable
00:26:53.980 21:32:08 -- common/autotest_common.sh@10 -- # set +x
00:26:53.980 [2024-04-24 21:32:08.771459] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization...
00:26:53.980 [2024-04-24 21:32:08.771556] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:26:53.980 EAL: No free 2048 kB hugepages reported on node 1
00:26:53.980 [2024-04-24 21:32:08.892195] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:54.238 [2024-04-24 21:32:08.983398] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:26:54.239 [2024-04-24 21:32:08.983432] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:26:54.239 [2024-04-24 21:32:08.983441] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:26:54.239 [2024-04-24 21:32:08.983450] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running.
00:26:54.239 [2024-04-24 21:32:08.983457] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:26:54.239 [2024-04-24 21:32:08.983481] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:26:54.497 21:32:09 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:26:54.497 21:32:09 -- common/autotest_common.sh@850 -- # return 0
00:26:54.497 21:32:09 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt
00:26:54.497 21:32:09 -- common/autotest_common.sh@716 -- # xtrace_disable
00:26:54.497 21:32:09 -- common/autotest_common.sh@10 -- # set +x
00:26:54.757 21:32:09 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:26:54.757 21:32:09 -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error
00:26:54.757 21:32:09 -- common/autotest_common.sh@549 -- # xtrace_disable
00:26:54.757 21:32:09 -- common/autotest_common.sh@10 -- # set +x
00:26:54.757 [2024-04-24 21:32:09.491935] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error
00:26:54.757 21:32:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:26:54.757 21:32:09 -- host/digest.sh@105 -- # common_target_config
00:26:54.757 21:32:09 -- host/digest.sh@43 -- # rpc_cmd
00:26:54.757 21:32:09 -- common/autotest_common.sh@549 -- # xtrace_disable
00:26:54.757 21:32:09 -- common/autotest_common.sh@10 -- # set +x
00:26:54.757 null0
00:26:54.757 [2024-04-24 21:32:09.646243] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:26:54.757 [2024-04-24 21:32:09.670384] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:26:54.757 21:32:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:26:54.757 21:32:09 -- host/digest.sh@108 -- # run_bperf_err randread 4096 128
00:26:54.757 21:32:09 -- host/digest.sh@54 -- # local rw bs qd
00:26:54.757 21:32:09 -- host/digest.sh@56 -- # rw=randread
00:26:54.757 21:32:09 -- host/digest.sh@56 -- # bs=4096
00:26:54.757 21:32:09 -- host/digest.sh@56 -- # qd=128
00:26:54.757 21:32:09 -- host/digest.sh@58 -- # bperfpid=1368287
00:26:54.757 21:32:09 -- host/digest.sh@60 -- # waitforlisten 1368287 /var/tmp/bperf.sock
00:26:54.757 21:32:09 -- common/autotest_common.sh@817 -- # '[' -z 1368287 ']'
00:26:54.757 21:32:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock
00:26:54.757 21:32:09 -- common/autotest_common.sh@822 -- # local max_retries=100
00:26:54.757 21:32:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:26:54.757 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:26:54.757 21:32:09 -- host/digest.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z
00:26:54.757 21:32:09 -- common/autotest_common.sh@826 -- # xtrace_disable
00:26:54.757 21:32:09 -- common/autotest_common.sh@10 -- # set +x
00:26:55.018 [2024-04-24 21:32:09.747141] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization...
00:26:55.018 [2024-04-24 21:32:09.747253] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1368287 ]
00:26:55.018 EAL: No free 2048 kB hugepages reported on node 1
00:26:55.018 [2024-04-24 21:32:09.862484] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:55.018 [2024-04-24 21:32:09.952709] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:26:55.586 21:32:10 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:26:55.587 21:32:10 -- common/autotest_common.sh@850 -- # return 0
00:26:55.587 21:32:10 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:55.587 21:32:10 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:55.845 21:32:10 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:26:55.845 21:32:10 -- common/autotest_common.sh@549 -- # xtrace_disable
00:26:55.845 21:32:10 -- common/autotest_common.sh@10 -- # set +x
00:26:55.845 21:32:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:26:55.845 21:32:10 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:55.845 21:32:10 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:56.104 nvme0n1
00:26:56.104 21:32:10 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:26:56.104 21:32:10 -- common/autotest_common.sh@549 -- # xtrace_disable
00:26:56.104 21:32:10 -- common/autotest_common.sh@10 -- # set +x
00:26:56.104 21:32:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:26:56.104 21:32:10 -- host/digest.sh@69 -- # bperf_py perform_tests
00:26:56.104 21:32:10 -- host/digest.sh@19 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:26:56.104 Running I/O for 2 seconds...
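At this point the pieces for the failure run are in place: injection was held off (-t disable) while the controller attached cleanly, and only then re-armed with -t corrupt, so subsequent crc32c results on the target are deliberately wrong. Each READ whose received data digest no longer verifies is failed by the initiator's nvme_tcp layer and surfaces as the COMMAND TRANSIENT TRANSPORT ERROR (00/22) completions that fill the remainder of this log. The two target-side RPCs from the trace, in order (the -i 256 argument is reproduced verbatim from the script):

  # keep digests correct while the controller attaches
  $SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t disable
  # then corrupt crc32c results so data digests stop matching
  $SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256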
00:26:56.104 [2024-04-24 21:32:10.971785] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:56.104 [2024-04-24 21:32:10.971836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:5411 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.104 [2024-04-24 21:32:10.971851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.104 [2024-04-24 21:32:10.982869] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:56.104 [2024-04-24 21:32:10.982902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:7939 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.104 [2024-04-24 21:32:10.982914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.104 [2024-04-24 21:32:10.992309] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:56.104 [2024-04-24 21:32:10.992337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:19125 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.104 [2024-04-24 21:32:10.992353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.104 [2024-04-24 21:32:11.003597] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:56.104 [2024-04-24 21:32:11.003624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:12451 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.104 [2024-04-24 21:32:11.003634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.104 [2024-04-24 21:32:11.015162] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:56.104 [2024-04-24 21:32:11.015190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:13335 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.104 [2024-04-24 21:32:11.015200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.104 [2024-04-24 21:32:11.023716] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:56.104 [2024-04-24 21:32:11.023744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:3208 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.104 [2024-04-24 21:32:11.023754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.104 [2024-04-24 21:32:11.036310] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:56.104 [2024-04-24 21:32:11.036339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:19814 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.104 [2024-04-24 21:32:11.036349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.104 [2024-04-24 21:32:11.048232] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:56.104 [2024-04-24 21:32:11.048262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:8149 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.104 [2024-04-24 21:32:11.048277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.104 [2024-04-24 21:32:11.057103] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:56.104 [2024-04-24 21:32:11.057129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:10433 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.104 [2024-04-24 21:32:11.057139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.363 [2024-04-24 21:32:11.067465] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:56.363 [2024-04-24 21:32:11.067493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:5254 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.363 [2024-04-24 21:32:11.067503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.363 [2024-04-24 21:32:11.077830] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:56.363 [2024-04-24 21:32:11.077857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:20668 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.363 [2024-04-24 21:32:11.077866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.363 [2024-04-24 21:32:11.087041] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:56.363 [2024-04-24 21:32:11.087075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:20319 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.363 [2024-04-24 21:32:11.087085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.363 [2024-04-24 21:32:11.095425] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:56.363 [2024-04-24 21:32:11.095458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17868 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.363 [2024-04-24 21:32:11.095470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.363 [2024-04-24 21:32:11.107033] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:56.363 [2024-04-24 21:32:11.107068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:23779 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.363 
[2024-04-24 21:32:11.107078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.363 [2024-04-24 21:32:11.118746] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:56.363 [2024-04-24 21:32:11.118776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:698 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.363 [2024-04-24 21:32:11.118786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.363 [2024-04-24 21:32:11.127663] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:56.363 [2024-04-24 21:32:11.127689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:23870 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.363 [2024-04-24 21:32:11.127699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.363 [2024-04-24 21:32:11.139679] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:56.363 [2024-04-24 21:32:11.139707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:15380 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.363 [2024-04-24 21:32:11.139716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.364 [2024-04-24 21:32:11.150810] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:56.364 [2024-04-24 21:32:11.150835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:18232 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.364 [2024-04-24 21:32:11.150844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.364 [2024-04-24 21:32:11.159633] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:56.364 [2024-04-24 21:32:11.159659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:18070 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.364 [2024-04-24 21:32:11.159670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.364 [2024-04-24 21:32:11.170596] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:56.364 [2024-04-24 21:32:11.170622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:22050 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.364 [2024-04-24 21:32:11.170637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.364 [2024-04-24 21:32:11.180305] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:56.364 [2024-04-24 21:32:11.180331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:90 nsid:1 lba:22693 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.364 [2024-04-24 21:32:11.180341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.364 [2024-04-24 21:32:11.192812] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:56.364 [2024-04-24 21:32:11.192837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:8907 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.364 [2024-04-24 21:32:11.192847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.364 [2024-04-24 21:32:11.204390] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:56.364 [2024-04-24 21:32:11.204416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:22195 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.364 [2024-04-24 21:32:11.204426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.364 [2024-04-24 21:32:11.213331] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:56.364 [2024-04-24 21:32:11.213357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:19306 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.364 [2024-04-24 21:32:11.213367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.364 [2024-04-24 21:32:11.224993] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:56.364 [2024-04-24 21:32:11.225019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20153 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.364 [2024-04-24 21:32:11.225028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.364 [2024-04-24 21:32:11.237423] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:56.364 [2024-04-24 21:32:11.237450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:7829 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.364 [2024-04-24 21:32:11.237459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.364 [2024-04-24 21:32:11.245714] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:56.364 [2024-04-24 21:32:11.245738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:7843 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.364 [2024-04-24 21:32:11.245748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.364 [2024-04-24 21:32:11.257246] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 
00:26:56.364 [2024-04-24 21:32:11.257280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:3367 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.364 [2024-04-24 21:32:11.257291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.364 [2024-04-24 21:32:11.269494] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:56.364 [2024-04-24 21:32:11.269526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:7613 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.364 [2024-04-24 21:32:11.269536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.364 [2024-04-24 21:32:11.281353] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:56.364 [2024-04-24 21:32:11.281378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:7996 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.364 [2024-04-24 21:32:11.281388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.364 [2024-04-24 21:32:11.290757] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:56.364 [2024-04-24 21:32:11.290782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:18729 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.364 [2024-04-24 21:32:11.290793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.364 [2024-04-24 21:32:11.301938] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:56.364 [2024-04-24 21:32:11.301965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:22551 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.364 [2024-04-24 21:32:11.301975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.364 [2024-04-24 21:32:11.312948] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:56.364 [2024-04-24 21:32:11.312981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:6988 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.364 [2024-04-24 21:32:11.312995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.364 [2024-04-24 21:32:11.323947] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:56.364 [2024-04-24 21:32:11.323976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:23833 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.364 [2024-04-24 21:32:11.323986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.624 [2024-04-24 21:32:11.336067] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:56.624 [2024-04-24 21:32:11.336094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:25147 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.624 [2024-04-24 21:32:11.336103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.624 [2024-04-24 21:32:11.344613] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:56.624 [2024-04-24 21:32:11.344638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24490 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.624 [2024-04-24 21:32:11.344648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.624 [2024-04-24 21:32:11.356037] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:56.624 [2024-04-24 21:32:11.356066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:11569 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.624 [2024-04-24 21:32:11.356081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.624 [2024-04-24 21:32:11.367080] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:56.625 [2024-04-24 21:32:11.367106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:24114 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.625 [2024-04-24 21:32:11.367115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.625 [2024-04-24 21:32:11.375302] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:56.625 [2024-04-24 21:32:11.375326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:18683 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.625 [2024-04-24 21:32:11.375335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.625 [2024-04-24 21:32:11.386815] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:56.625 [2024-04-24 21:32:11.386839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:8192 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.625 [2024-04-24 21:32:11.386848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.625 [2024-04-24 21:32:11.396050] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:56.625 [2024-04-24 21:32:11.396078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:25449 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.625 [2024-04-24 21:32:11.396089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.625 [2024-04-24 21:32:11.407206] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:56.625 [2024-04-24 21:32:11.407231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:8256 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.625 [2024-04-24 21:32:11.407241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.625 [2024-04-24 21:32:11.415900] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:56.625 [2024-04-24 21:32:11.415924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:3516 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.625 [2024-04-24 21:32:11.415934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.625 [2024-04-24 21:32:11.427183] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:56.625 [2024-04-24 21:32:11.427207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:9192 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.625 [2024-04-24 21:32:11.427216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.625 [2024-04-24 21:32:11.436802] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:56.625 [2024-04-24 21:32:11.436827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19997 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.625 [2024-04-24 21:32:11.436837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.625 [2024-04-24 21:32:11.446263] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:56.625 [2024-04-24 21:32:11.446296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:24234 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.625 [2024-04-24 21:32:11.446306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.625 [2024-04-24 21:32:11.455724] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:56.625 [2024-04-24 21:32:11.455750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:3 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.625 [2024-04-24 21:32:11.455760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.625 [2024-04-24 21:32:11.464260] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:56.625 [2024-04-24 21:32:11.464291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:4834 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.625 [2024-04-24 21:32:11.464300] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.625 [2024-04-24 21:32:11.476971] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:56.625 [2024-04-24 21:32:11.476996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:11179 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.625 [2024-04-24 21:32:11.477005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.625 [2024-04-24 21:32:11.486338] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:56.625 [2024-04-24 21:32:11.486367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:12422 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.625 [2024-04-24 21:32:11.486378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.625 [2024-04-24 21:32:11.495808] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:56.625 [2024-04-24 21:32:11.495833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:18563 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.625 [2024-04-24 21:32:11.495843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.625 [2024-04-24 21:32:11.507649] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:56.625 [2024-04-24 21:32:11.507674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:8988 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.625 [2024-04-24 21:32:11.507683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.625 [2024-04-24 21:32:11.517062] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:56.625 [2024-04-24 21:32:11.517086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:5788 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.625 [2024-04-24 21:32:11.517096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.625 [2024-04-24 21:32:11.528771] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:56.625 [2024-04-24 21:32:11.528798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:671 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.625 [2024-04-24 21:32:11.528815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.625 [2024-04-24 21:32:11.539947] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:56.625 [2024-04-24 21:32:11.539972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:18206 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:26:56.625 [2024-04-24 21:32:11.539981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.625 [2024-04-24 21:32:11.548776] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:56.625 [2024-04-24 21:32:11.548800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:13175 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.625 [2024-04-24 21:32:11.548809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.625 [2024-04-24 21:32:11.559970] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:56.625 [2024-04-24 21:32:11.559996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:4015 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.625 [2024-04-24 21:32:11.560006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.625 [2024-04-24 21:32:11.568500] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:56.625 [2024-04-24 21:32:11.568524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:2408 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.625 [2024-04-24 21:32:11.568534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.625 [2024-04-24 21:32:11.580375] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:56.625 [2024-04-24 21:32:11.580399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:13545 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.625 [2024-04-24 21:32:11.580408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.887 [2024-04-24 21:32:11.592103] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:56.887 [2024-04-24 21:32:11.592130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:11650 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.887 [2024-04-24 21:32:11.592140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.887 [2024-04-24 21:32:11.602351] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:56.887 [2024-04-24 21:32:11.602377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25548 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.887 [2024-04-24 21:32:11.602387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.887 [2024-04-24 21:32:11.611005] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:56.887 [2024-04-24 21:32:11.611030] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:780 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.887 [2024-04-24 21:32:11.611040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.887 [2024-04-24 21:32:11.621214] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:56.887 [2024-04-24 21:32:11.621244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:21031 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.887 [2024-04-24 21:32:11.621254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.887 [2024-04-24 21:32:11.633890] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:56.887 [2024-04-24 21:32:11.633917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:19324 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.887 [2024-04-24 21:32:11.633927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.887 [2024-04-24 21:32:11.643810] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:56.887 [2024-04-24 21:32:11.643834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:4624 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.887 [2024-04-24 21:32:11.643844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.887 [2024-04-24 21:32:11.652379] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:56.887 [2024-04-24 21:32:11.652403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:9569 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.887 [2024-04-24 21:32:11.652413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.887 [2024-04-24 21:32:11.664310] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:56.887 [2024-04-24 21:32:11.664334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:12364 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.887 [2024-04-24 21:32:11.664343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.887 [2024-04-24 21:32:11.672648] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:56.887 [2024-04-24 21:32:11.672678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1622 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.887 [2024-04-24 21:32:11.672689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.888 [2024-04-24 21:32:11.683766] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x614000007240) 00:26:56.888 [2024-04-24 21:32:11.683792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:7623 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.888 [2024-04-24 21:32:11.683802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.888 [2024-04-24 21:32:11.693327] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:56.888 [2024-04-24 21:32:11.693353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:15343 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.888 [2024-04-24 21:32:11.693363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.888 [2024-04-24 21:32:11.703817] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:56.888 [2024-04-24 21:32:11.703842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20476 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.888 [2024-04-24 21:32:11.703852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.888 [2024-04-24 21:32:11.712848] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:56.888 [2024-04-24 21:32:11.712873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:8110 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.888 [2024-04-24 21:32:11.712882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.888 [2024-04-24 21:32:11.722285] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:56.888 [2024-04-24 21:32:11.722310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:3179 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.888 [2024-04-24 21:32:11.722320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.888 [2024-04-24 21:32:11.733942] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:56.888 [2024-04-24 21:32:11.733968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:4590 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.888 [2024-04-24 21:32:11.733977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.888 [2024-04-24 21:32:11.744307] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:56.888 [2024-04-24 21:32:11.744332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:6920 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.888 [2024-04-24 21:32:11.744342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.888 [2024-04-24 
21:32:11.752580] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:56.888 [2024-04-24 21:32:11.752604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:23508 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.888 [2024-04-24 21:32:11.752613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.888 [2024-04-24 21:32:11.762698] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:56.888 [2024-04-24 21:32:11.762722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:23465 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.888 [2024-04-24 21:32:11.762731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.888 [2024-04-24 21:32:11.773426] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:56.888 [2024-04-24 21:32:11.773451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:9283 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.888 [2024-04-24 21:32:11.773461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.888 [2024-04-24 21:32:11.784252] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:56.888 [2024-04-24 21:32:11.784280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:22135 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.888 [2024-04-24 21:32:11.784289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.888 [2024-04-24 21:32:11.794072] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:56.888 [2024-04-24 21:32:11.794100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:6088 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.888 [2024-04-24 21:32:11.794109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.888 [2024-04-24 21:32:11.802724] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:56.888 [2024-04-24 21:32:11.802748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11100 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.888 [2024-04-24 21:32:11.802757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.888 [2024-04-24 21:32:11.812553] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:56.888 [2024-04-24 21:32:11.812577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21385 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.888 [2024-04-24 21:32:11.812587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.888 [2024-04-24 21:32:11.822919] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:56.888 [2024-04-24 21:32:11.822942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:3193 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.888 [2024-04-24 21:32:11.822952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.888 [2024-04-24 21:32:11.831403] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:56.888 [2024-04-24 21:32:11.831431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:4921 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.888 [2024-04-24 21:32:11.831442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.888 [2024-04-24 21:32:11.841380] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:56.888 [2024-04-24 21:32:11.841404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15247 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:56.888 [2024-04-24 21:32:11.841414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.150 [2024-04-24 21:32:11.851684] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:57.150 [2024-04-24 21:32:11.851709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24069 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.150 [2024-04-24 21:32:11.851719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.150 [2024-04-24 21:32:11.862508] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:57.150 [2024-04-24 21:32:11.862533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:14964 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.150 [2024-04-24 21:32:11.862543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.150 [2024-04-24 21:32:11.871135] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:57.150 [2024-04-24 21:32:11.871166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:18054 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.150 [2024-04-24 21:32:11.871178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.150 [2024-04-24 21:32:11.881219] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:57.150 [2024-04-24 21:32:11.881245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:3711 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.150 [2024-04-24 21:32:11.881255] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.150 [2024-04-24 21:32:11.890377] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:57.150 [2024-04-24 21:32:11.890403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:4527 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.150 [2024-04-24 21:32:11.890413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.150 [2024-04-24 21:32:11.900947] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:57.150 [2024-04-24 21:32:11.900974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:25339 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.150 [2024-04-24 21:32:11.900983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.150 [2024-04-24 21:32:11.910964] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:57.150 [2024-04-24 21:32:11.910988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:5312 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.150 [2024-04-24 21:32:11.910998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.150 [2024-04-24 21:32:11.919695] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:57.150 [2024-04-24 21:32:11.919722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:1685 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.150 [2024-04-24 21:32:11.919733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.150 [2024-04-24 21:32:11.931131] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:57.150 [2024-04-24 21:32:11.931156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:18992 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.150 [2024-04-24 21:32:11.931166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.150 [2024-04-24 21:32:11.941219] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:57.150 [2024-04-24 21:32:11.941244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15312 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.150 [2024-04-24 21:32:11.941253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.150 [2024-04-24 21:32:11.949712] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:57.150 [2024-04-24 21:32:11.949737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:1444 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.150 [2024-04-24 21:32:11.949747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.150 [2024-04-24 21:32:11.960680] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:57.150 [2024-04-24 21:32:11.960711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:1135 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.150 [2024-04-24 21:32:11.960721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.150 [2024-04-24 21:32:11.969576] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:57.150 [2024-04-24 21:32:11.969600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:24285 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.150 [2024-04-24 21:32:11.969610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.150 [2024-04-24 21:32:11.979841] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:57.150 [2024-04-24 21:32:11.979865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:3049 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.150 [2024-04-24 21:32:11.979882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.150 [2024-04-24 21:32:11.990623] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:57.150 [2024-04-24 21:32:11.990648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:24484 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.150 [2024-04-24 21:32:11.990658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.150 [2024-04-24 21:32:11.999720] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:57.150 [2024-04-24 21:32:11.999744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:22743 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.150 [2024-04-24 21:32:11.999753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.150 [2024-04-24 21:32:12.011928] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:57.150 [2024-04-24 21:32:12.011956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:8384 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.150 [2024-04-24 21:32:12.011968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.150 [2024-04-24 21:32:12.022023] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:57.150 [2024-04-24 21:32:12.022048] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:6503 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.150 [2024-04-24 21:32:12.022057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.150 [2024-04-24 21:32:12.030759] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:57.150 [2024-04-24 21:32:12.030785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:20959 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.150 [2024-04-24 21:32:12.030795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.150 [2024-04-24 21:32:12.041336] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:57.150 [2024-04-24 21:32:12.041362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:6857 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.151 [2024-04-24 21:32:12.041372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.151 [2024-04-24 21:32:12.049865] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:57.151 [2024-04-24 21:32:12.049891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:5262 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.151 [2024-04-24 21:32:12.049902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.151 [2024-04-24 21:32:12.061164] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:57.151 [2024-04-24 21:32:12.061189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21407 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.151 [2024-04-24 21:32:12.061198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.151 [2024-04-24 21:32:12.070394] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:57.151 [2024-04-24 21:32:12.070419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:6691 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.151 [2024-04-24 21:32:12.070429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.151 [2024-04-24 21:32:12.081446] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:57.151 [2024-04-24 21:32:12.081472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:22896 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.151 [2024-04-24 21:32:12.081482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.151 [2024-04-24 21:32:12.090784] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x614000007240) 00:26:57.151 [2024-04-24 21:32:12.090809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:24753 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.151 [2024-04-24 21:32:12.090819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.151 [2024-04-24 21:32:12.100041] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:57.151 [2024-04-24 21:32:12.100067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:6421 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.151 [2024-04-24 21:32:12.100077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.151 [2024-04-24 21:32:12.108987] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:57.151 [2024-04-24 21:32:12.109014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:21233 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.151 [2024-04-24 21:32:12.109024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.413 [2024-04-24 21:32:12.120352] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:57.413 [2024-04-24 21:32:12.120378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21033 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.413 [2024-04-24 21:32:12.120387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.413 [2024-04-24 21:32:12.131440] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:57.413 [2024-04-24 21:32:12.131478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:5950 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.413 [2024-04-24 21:32:12.131489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.413 [2024-04-24 21:32:12.139911] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:57.413 [2024-04-24 21:32:12.139938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:4475 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.413 [2024-04-24 21:32:12.139949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.413 [2024-04-24 21:32:12.151843] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:57.413 [2024-04-24 21:32:12.151867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:3448 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.413 [2024-04-24 21:32:12.151876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.413 [2024-04-24 
21:32:12.160695] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:57.413 [2024-04-24 21:32:12.160719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18045 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.413 [2024-04-24 21:32:12.160729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.413 [2024-04-24 21:32:12.172426] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:57.413 [2024-04-24 21:32:12.172452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:12287 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.413 [2024-04-24 21:32:12.172462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.413 [2024-04-24 21:32:12.180568] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:57.413 [2024-04-24 21:32:12.180594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:19381 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.414 [2024-04-24 21:32:12.180604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.414 [2024-04-24 21:32:12.191996] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:57.414 [2024-04-24 21:32:12.192019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:15256 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.414 [2024-04-24 21:32:12.192029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.414 [2024-04-24 21:32:12.203126] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:57.414 [2024-04-24 21:32:12.203151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:24105 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.414 [2024-04-24 21:32:12.203161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.414 [2024-04-24 21:32:12.212360] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:57.414 [2024-04-24 21:32:12.212388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:617 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.414 [2024-04-24 21:32:12.212398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.414 [2024-04-24 21:32:12.224694] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:57.414 [2024-04-24 21:32:12.224721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:24535 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.414 [2024-04-24 21:32:12.224732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.414 [2024-04-24 21:32:12.233824] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:57.414 [2024-04-24 21:32:12.233848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:9696 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.414 [2024-04-24 21:32:12.233859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.414 [2024-04-24 21:32:12.245128] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:57.414 [2024-04-24 21:32:12.245153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21785 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.414 [2024-04-24 21:32:12.245163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.414 [2024-04-24 21:32:12.253492] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:57.414 [2024-04-24 21:32:12.253519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:16293 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.414 [2024-04-24 21:32:12.253528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.414 [2024-04-24 21:32:12.265871] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:57.414 [2024-04-24 21:32:12.265897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:292 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.414 [2024-04-24 21:32:12.265908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.414 [2024-04-24 21:32:12.277693] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:57.414 [2024-04-24 21:32:12.277722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13712 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.414 [2024-04-24 21:32:12.277732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.414 [2024-04-24 21:32:12.287710] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:57.414 [2024-04-24 21:32:12.287737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:9577 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.414 [2024-04-24 21:32:12.287749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.414 [2024-04-24 21:32:12.299921] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:57.414 [2024-04-24 21:32:12.299945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10379 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.414 [2024-04-24 
21:32:12.299954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.414 [2024-04-24 21:32:12.309327] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:57.414 [2024-04-24 21:32:12.309358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9232 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.414 [2024-04-24 21:32:12.309368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.414 [2024-04-24 21:32:12.318102] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:57.414 [2024-04-24 21:32:12.318127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:6805 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.414 [2024-04-24 21:32:12.318137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.414 [2024-04-24 21:32:12.330298] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:57.414 [2024-04-24 21:32:12.330323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:16738 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.414 [2024-04-24 21:32:12.330332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.414 [2024-04-24 21:32:12.342495] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:57.414 [2024-04-24 21:32:12.342519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:9122 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.414 [2024-04-24 21:32:12.342528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.414 [2024-04-24 21:32:12.352617] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:57.414 [2024-04-24 21:32:12.352641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8254 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.414 [2024-04-24 21:32:12.352650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.414 [2024-04-24 21:32:12.360579] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:57.414 [2024-04-24 21:32:12.360604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20882 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.414 [2024-04-24 21:32:12.360613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.414 [2024-04-24 21:32:12.371718] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:57.414 [2024-04-24 21:32:12.371742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 
lba:17793 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.414 [2024-04-24 21:32:12.371752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.674 [2024-04-24 21:32:12.381318] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:57.674 [2024-04-24 21:32:12.381344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10017 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.674 [2024-04-24 21:32:12.381355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.674 [2024-04-24 21:32:12.390965] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:57.674 [2024-04-24 21:32:12.390991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:22145 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.674 [2024-04-24 21:32:12.391000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.674 [2024-04-24 21:32:12.400035] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:57.674 [2024-04-24 21:32:12.400063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:14806 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.674 [2024-04-24 21:32:12.400075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.674 [2024-04-24 21:32:12.410283] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:57.674 [2024-04-24 21:32:12.410310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:18034 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.674 [2024-04-24 21:32:12.410321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.674 [2024-04-24 21:32:12.422420] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:57.674 [2024-04-24 21:32:12.422453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:18160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.674 [2024-04-24 21:32:12.422463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.674 [2024-04-24 21:32:12.430987] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:57.674 [2024-04-24 21:32:12.431011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:23262 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.674 [2024-04-24 21:32:12.431021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.675 [2024-04-24 21:32:12.442809] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:57.675 [2024-04-24 
21:32:12.442834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:14173 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.675 [2024-04-24 21:32:12.442845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.675 [2024-04-24 21:32:12.451235] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:57.675 [2024-04-24 21:32:12.451260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:25360 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.675 [2024-04-24 21:32:12.451274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.675 [2024-04-24 21:32:12.464804] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:57.675 [2024-04-24 21:32:12.464834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:2726 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.675 [2024-04-24 21:32:12.464846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.675 [2024-04-24 21:32:12.477060] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:57.675 [2024-04-24 21:32:12.477087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22151 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.675 [2024-04-24 21:32:12.477096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.675 [2024-04-24 21:32:12.486929] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:57.675 [2024-04-24 21:32:12.486954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:6364 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.675 [2024-04-24 21:32:12.486968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.675 [2024-04-24 21:32:12.494939] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:57.675 [2024-04-24 21:32:12.494966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:12902 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.675 [2024-04-24 21:32:12.494976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.675 [2024-04-24 21:32:12.506555] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:57.675 [2024-04-24 21:32:12.506580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:3978 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.675 [2024-04-24 21:32:12.506589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.675 [2024-04-24 21:32:12.518103] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:57.675 [2024-04-24 21:32:12.518128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:9697 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.675 [2024-04-24 21:32:12.518138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.675 [2024-04-24 21:32:12.526748] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:57.675 [2024-04-24 21:32:12.526771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:15790 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.675 [2024-04-24 21:32:12.526781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.675 [2024-04-24 21:32:12.537648] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:57.675 [2024-04-24 21:32:12.537672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:23984 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.675 [2024-04-24 21:32:12.537681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.675 [2024-04-24 21:32:12.548616] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:57.675 [2024-04-24 21:32:12.548641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:2710 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.675 [2024-04-24 21:32:12.548650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.675 [2024-04-24 21:32:12.558614] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:57.675 [2024-04-24 21:32:12.558641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:14974 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.675 [2024-04-24 21:32:12.558651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.675 [2024-04-24 21:32:12.566694] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:57.675 [2024-04-24 21:32:12.566718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:22499 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.675 [2024-04-24 21:32:12.566727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.675 [2024-04-24 21:32:12.576417] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:57.675 [2024-04-24 21:32:12.576444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15961 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.675 [2024-04-24 21:32:12.576454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.675 [2024-04-24 21:32:12.587407] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:57.675 [2024-04-24 21:32:12.587433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:20079 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.675 [2024-04-24 21:32:12.587442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.675 [2024-04-24 21:32:12.598467] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:57.675 [2024-04-24 21:32:12.598491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:1314 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.675 [2024-04-24 21:32:12.598501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.675 [2024-04-24 21:32:12.607115] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:57.675 [2024-04-24 21:32:12.607139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:13885 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.675 [2024-04-24 21:32:12.607148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.675 [2024-04-24 21:32:12.619194] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:57.675 [2024-04-24 21:32:12.619222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:24574 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.675 [2024-04-24 21:32:12.619232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.675 [2024-04-24 21:32:12.630828] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:57.675 [2024-04-24 21:32:12.630853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:12653 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.675 [2024-04-24 21:32:12.630863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.934 [2024-04-24 21:32:12.639698] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:57.934 [2024-04-24 21:32:12.639724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:25538 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.934 [2024-04-24 21:32:12.639736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.934 [2024-04-24 21:32:12.651822] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:57.934 [2024-04-24 21:32:12.651848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:2602 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.934 [2024-04-24 21:32:12.651858] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.934 [2024-04-24 21:32:12.660338] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:57.934 [2024-04-24 21:32:12.660364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19048 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.934 [2024-04-24 21:32:12.660382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.934 [2024-04-24 21:32:12.671964] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:57.934 [2024-04-24 21:32:12.671991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:22424 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.934 [2024-04-24 21:32:12.672002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.934 [2024-04-24 21:32:12.680517] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:57.934 [2024-04-24 21:32:12.680543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:13183 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.934 [2024-04-24 21:32:12.680556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.934 [2024-04-24 21:32:12.691224] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:57.934 [2024-04-24 21:32:12.691250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:19913 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.934 [2024-04-24 21:32:12.691261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.934 [2024-04-24 21:32:12.702499] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:57.934 [2024-04-24 21:32:12.702530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13205 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.934 [2024-04-24 21:32:12.702543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.934 [2024-04-24 21:32:12.713825] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:57.934 [2024-04-24 21:32:12.713850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:1192 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.934 [2024-04-24 21:32:12.713860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.934 [2024-04-24 21:32:12.722500] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:57.934 [2024-04-24 21:32:12.722528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:4782 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.934 [2024-04-24 21:32:12.722539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.934 [2024-04-24 21:32:12.735260] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:57.934 [2024-04-24 21:32:12.735290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:10730 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.934 [2024-04-24 21:32:12.735299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.934 [2024-04-24 21:32:12.743963] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:57.934 [2024-04-24 21:32:12.743988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21935 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.934 [2024-04-24 21:32:12.743998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.934 [2024-04-24 21:32:12.756920] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:57.934 [2024-04-24 21:32:12.756944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:5634 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.934 [2024-04-24 21:32:12.756954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.934 [2024-04-24 21:32:12.767592] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:57.934 [2024-04-24 21:32:12.767618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10962 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.934 [2024-04-24 21:32:12.767628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.934 [2024-04-24 21:32:12.777358] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:57.934 [2024-04-24 21:32:12.777388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:11277 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.934 [2024-04-24 21:32:12.777399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.934 [2024-04-24 21:32:12.786236] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:57.935 [2024-04-24 21:32:12.786263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:6418 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.935 [2024-04-24 21:32:12.786278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.935 [2024-04-24 21:32:12.795558] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:57.935 [2024-04-24 21:32:12.795584] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:21697 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.935 [2024-04-24 21:32:12.795594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.935 [2024-04-24 21:32:12.805709] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:57.935 [2024-04-24 21:32:12.805740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:12959 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.935 [2024-04-24 21:32:12.805750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.935 [2024-04-24 21:32:12.814093] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:57.935 [2024-04-24 21:32:12.814119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:4193 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.935 [2024-04-24 21:32:12.814129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.935 [2024-04-24 21:32:12.825572] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:57.935 [2024-04-24 21:32:12.825599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:1610 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.935 [2024-04-24 21:32:12.825609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.935 [2024-04-24 21:32:12.840053] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:57.935 [2024-04-24 21:32:12.840083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:2986 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.935 [2024-04-24 21:32:12.840100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.935 [2024-04-24 21:32:12.849173] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:57.935 [2024-04-24 21:32:12.849202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:4088 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.935 [2024-04-24 21:32:12.849214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.935 [2024-04-24 21:32:12.863398] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:57.935 [2024-04-24 21:32:12.863424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:6085 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.935 [2024-04-24 21:32:12.863434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.935 [2024-04-24 21:32:12.872475] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x614000007240) 00:26:57.935 [2024-04-24 21:32:12.872504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:15376 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.935 [2024-04-24 21:32:12.872514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.935 [2024-04-24 21:32:12.882766] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:57.935 [2024-04-24 21:32:12.882793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:11574 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.935 [2024-04-24 21:32:12.882802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:57.935 [2024-04-24 21:32:12.891766] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:57.935 [2024-04-24 21:32:12.891792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:6640 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.935 [2024-04-24 21:32:12.891805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:58.193 [2024-04-24 21:32:12.901640] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:58.193 [2024-04-24 21:32:12.901666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:5578 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.193 [2024-04-24 21:32:12.901677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:58.193 [2024-04-24 21:32:12.911188] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:58.193 [2024-04-24 21:32:12.911214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:18772 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.193 [2024-04-24 21:32:12.911224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:58.194 [2024-04-24 21:32:12.919556] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:58.194 [2024-04-24 21:32:12.919583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:8588 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.194 [2024-04-24 21:32:12.919593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:58.194 [2024-04-24 21:32:12.932555] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:58.194 [2024-04-24 21:32:12.932583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22220 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:58.194 [2024-04-24 21:32:12.932594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:58.194 [2024-04-24 
21:32:12.943656] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240)
00:26:58.194 [2024-04-24 21:32:12.943680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:23214 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:58.194 [2024-04-24 21:32:12.943689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:58.194 [2024-04-24 21:32:12.952843] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240)
00:26:58.194 [2024-04-24 21:32:12.952867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:813 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:58.194 [2024-04-24 21:32:12.952878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:58.194
00:26:58.194 Latency(us)
00:26:58.194 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:58.194 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:26:58.194 nvme0n1 : 2.00 24636.51 96.24 0.00 0.00 5190.47 2311.01 16142.55
00:26:58.194 ===================================================================================================================
00:26:58.194 Total : 24636.51 96.24 0.00 0.00 5190.47 2311.01 16142.55
00:26:58.194 0
00:26:58.194 21:32:12 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:26:58.194 21:32:12 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:26:58.194 21:32:12 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:26:58.194 | .driver_specific
00:26:58.194 | .nvme_error
00:26:58.194 | .status_code
00:26:58.194 | .command_transient_transport_error'
00:26:58.194 21:32:12 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:26:58.194 21:32:13 -- host/digest.sh@71 -- # (( 193 > 0 ))
00:26:58.194 21:32:13 -- host/digest.sh@73 -- # killprocess 1368287
00:26:58.194 21:32:13 -- common/autotest_common.sh@936 -- # '[' -z 1368287 ']'
00:26:58.194 21:32:13 -- common/autotest_common.sh@940 -- # kill -0 1368287
00:26:58.194 21:32:13 -- common/autotest_common.sh@941 -- # uname
00:26:58.194 21:32:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:26:58.194 21:32:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1368287
00:26:58.194 21:32:13 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:26:58.194 21:32:13 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:26:58.194 21:32:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1368287'
killing process with pid 1368287
21:32:13 -- common/autotest_common.sh@955 -- # kill 1368287
Received shutdown signal, test time was about 2.000000 seconds
00:26:58.194
00:26:58.194 Latency(us)
Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
===================================================================================================================
Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
21:32:13 -- common/autotest_common.sh@960 -- # wait 1368287
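The get_transient_errcount check traced above reduces to the following standalone sketch. The rpc.py path, socket, bdev name, and jq field path are taken verbatim from this job's trace; the rpc/sock variables and the single-line form of the jq filter are illustrative shorthand, and the counters are assumed to be populated because the controller was attached on a bdevperf instance configured with --nvme-error-stat, as in this run.

  # Sketch: count the transient transport errors recorded by the bdev layer
  # for nvme0n1 (mirrors host/digest.sh's get_transient_errcount).
  rpc=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py   # path from this job
  sock=/var/tmp/bperf.sock                                          # bdevperf RPC socket
  errcount=$("$rpc" -s "$sock" bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
  # The test asserts that at least one injected CRC error surfaced as a
  # transient transport error; here the trace evaluated (( 193 > 0 )).
  (( errcount > 0 ))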
00:26:58.762 21:32:13 -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:26:58.762 21:32:13 -- host/digest.sh@54 -- # local rw bs qd
00:26:58.762 21:32:13 -- host/digest.sh@56 -- # rw=randread
00:26:58.762 21:32:13 -- host/digest.sh@56 -- # bs=131072
00:26:58.762 21:32:13 -- host/digest.sh@56 -- # qd=16
00:26:58.762 21:32:13 -- host/digest.sh@58 -- # bperfpid=1369173
00:26:58.762 21:32:13 -- host/digest.sh@60 -- # waitforlisten 1369173 /var/tmp/bperf.sock
00:26:58.762 21:32:13 -- common/autotest_common.sh@817 -- # '[' -z 1369173 ']'
00:26:58.762 21:32:13 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock
00:26:58.762 21:32:13 -- common/autotest_common.sh@822 -- # local max_retries=100
00:26:58.762 21:32:13 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:26:58.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:26:58.762 21:32:13 -- common/autotest_common.sh@826 -- # xtrace_disable
00:26:58.762 21:32:13 -- common/autotest_common.sh@10 -- # set +x
00:26:58.762 21:32:13 -- host/digest.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:26:58.762 [2024-04-24 21:32:13.581878] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization...
00:26:58.762 [2024-04-24 21:32:13.581991] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1369173 ]
00:26:58.762 I/O size of 131072 is greater than zero copy threshold (65536).
00:26:58.762 Zero copy mechanism will not be used.
00:26:58.762 EAL: No free 2048 kB hugepages reported on node 1
00:26:59.023 [2024-04-24 21:32:13.692638] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:59.023 [2024-04-24 21:32:13.780769] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:26:59.595 21:32:14 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:26:59.595 21:32:14 -- common/autotest_common.sh@850 -- # return 0
00:26:59.595 21:32:14 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:59.595 21:32:14 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:59.595 21:32:14 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:26:59.595 21:32:14 -- common/autotest_common.sh@549 -- # xtrace_disable
00:26:59.595 21:32:14 -- common/autotest_common.sh@10 -- # set +x
00:26:59.595 21:32:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:26:59.595 21:32:14 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:59.595 21:32:14 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:59.854 nvme0n1
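Everything from run_bperf_err down to the nvme0n1 line above is bring-up: bdevperf is started in the background against a private RPC socket, waitforlisten polls until that socket answers, NVMe error statistics and unlimited bdev retries are switched on, and the controller is attached over TCP with data digest enabled. A minimal sketch of that sequence, assuming this job's workspace layout and with waitforlisten reduced to a simple poll:

    # Start bdevperf idle (-z: wait for a perform_tests RPC) on its own socket
    # so it does not collide with the target's default /var/tmp/spdk.sock.
    /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf \
        -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z &
    bperfpid=$!

    # Simplified stand-in for waitforlisten: block until the socket answers.
    until /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/bperf.sock rpc_get_methods &> /dev/null; do
        sleep 0.1
    done

    # Count NVMe error completions per status code, retry failed I/O forever,
    # then attach the TCP controller with data digest (--ddgst) turned on.
    /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0

The accel_error_inject_error call that follows arms the actual failure: with the crc32c opcode set to corrupt at an interval of 32, roughly one receive-side digest computation in 32 returns a wrong value, so reads complete with the data digest error and TRANSIENT TRANSPORT ERROR entries that fill the rest of this run.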
00:26:59.854 21:32:14 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:26:59.854 21:32:14 -- common/autotest_common.sh@549 -- # xtrace_disable
00:26:59.854 21:32:14 -- common/autotest_common.sh@10 -- # set +x
00:26:59.854 21:32:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:26:59.854 21:32:14 -- host/digest.sh@69 -- # bperf_py perform_tests
00:26:59.854 21:32:14 -- host/digest.sh@19 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:26:59.854 I/O size of 131072 is greater than zero copy threshold (65536).
00:26:59.854 Zero copy mechanism will not be used.
00:26:59.854 Running I/O for 2 seconds...
00:26:59.854 [2024-04-24 21:32:14.755140] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240)
00:26:59.854 [2024-04-24 21:32:14.755199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:59.854 [2024-04-24 21:32:14.755214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:59.854 [2024-04-24 21:32:14.761753] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240)
00:26:59.854 [2024-04-24 21:32:14.761788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:59.854 [2024-04-24 21:32:14.761806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:59.854 [2024-04-24 21:32:14.768139] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240)
00:26:59.854 [2024-04-24 21:32:14.768166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:59.854 [2024-04-24 21:32:14.768176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:59.854 [2024-04-24 21:32:14.774480] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240)
00:26:59.854 [2024-04-24 21:32:14.774507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:59.854 [2024-04-24 21:32:14.774517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:59.854 [2024-04-24 21:32:14.780794] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240)
00:26:59.854 [2024-04-24 21:32:14.780818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:59.854 [2024-04-24 21:32:14.780828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:59.854 [2024-04-24 21:32:14.787264] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240)
00:26:59.854 [2024-04-24 21:32:14.787293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:59.854 [2024-04-24 21:32:14.787302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:59.854 [2024-04-24 21:32:14.793695]
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:59.854 [2024-04-24 21:32:14.793718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.854 [2024-04-24 21:32:14.793728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:59.854 [2024-04-24 21:32:14.800248] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:59.854 [2024-04-24 21:32:14.800276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.854 [2024-04-24 21:32:14.800286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:59.854 [2024-04-24 21:32:14.806637] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:59.854 [2024-04-24 21:32:14.806660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.854 [2024-04-24 21:32:14.806669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:59.854 [2024-04-24 21:32:14.813030] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:59.854 [2024-04-24 21:32:14.813053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:59.854 [2024-04-24 21:32:14.813062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:00.114 [2024-04-24 21:32:14.819971] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:00.114 [2024-04-24 21:32:14.820001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.114 [2024-04-24 21:32:14.820010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:00.114 [2024-04-24 21:32:14.826801] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:00.114 [2024-04-24 21:32:14.826825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.114 [2024-04-24 21:32:14.826834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:00.114 [2024-04-24 21:32:14.833694] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:00.114 [2024-04-24 21:32:14.833718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.114 [2024-04-24 21:32:14.833728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:00.114 [2024-04-24 21:32:14.840190] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:00.114 [2024-04-24 21:32:14.840221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.114 [2024-04-24 21:32:14.840231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:00.114 [2024-04-24 21:32:14.846481] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:00.114 [2024-04-24 21:32:14.846505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.114 [2024-04-24 21:32:14.846515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:00.114 [2024-04-24 21:32:14.852906] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:00.114 [2024-04-24 21:32:14.852929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.114 [2024-04-24 21:32:14.852938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:00.114 [2024-04-24 21:32:14.859350] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:00.114 [2024-04-24 21:32:14.859374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.114 [2024-04-24 21:32:14.859383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:00.114 [2024-04-24 21:32:14.865613] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:00.114 [2024-04-24 21:32:14.865638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.114 [2024-04-24 21:32:14.865648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:00.114 [2024-04-24 21:32:14.872041] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:00.114 [2024-04-24 21:32:14.872067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.114 [2024-04-24 21:32:14.872081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:00.114 [2024-04-24 21:32:14.878426] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:00.114 [2024-04-24 21:32:14.878450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.114 [2024-04-24 21:32:14.878459] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:00.114 [2024-04-24 21:32:14.884682] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:00.114 [2024-04-24 21:32:14.884707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.114 [2024-04-24 21:32:14.884717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:00.114 [2024-04-24 21:32:14.890942] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:00.114 [2024-04-24 21:32:14.890965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.114 [2024-04-24 21:32:14.890974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:00.114 [2024-04-24 21:32:14.897320] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:00.114 [2024-04-24 21:32:14.897343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.114 [2024-04-24 21:32:14.897353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:00.114 [2024-04-24 21:32:14.904777] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:00.114 [2024-04-24 21:32:14.904801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.114 [2024-04-24 21:32:14.904812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:00.114 [2024-04-24 21:32:14.912746] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:00.114 [2024-04-24 21:32:14.912769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.114 [2024-04-24 21:32:14.912779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:00.114 [2024-04-24 21:32:14.920691] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:00.114 [2024-04-24 21:32:14.920716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.114 [2024-04-24 21:32:14.920726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:00.114 [2024-04-24 21:32:14.927860] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:00.114 [2024-04-24 21:32:14.927883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.114 [2024-04-24 21:32:14.927893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:00.114 [2024-04-24 21:32:14.934126] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:00.114 [2024-04-24 21:32:14.934153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.114 [2024-04-24 21:32:14.934162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:00.114 [2024-04-24 21:32:14.940380] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:00.114 [2024-04-24 21:32:14.940404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.114 [2024-04-24 21:32:14.940413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:00.114 [2024-04-24 21:32:14.946613] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:00.114 [2024-04-24 21:32:14.946636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.114 [2024-04-24 21:32:14.946645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:00.115 [2024-04-24 21:32:14.952874] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:00.115 [2024-04-24 21:32:14.952896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.115 [2024-04-24 21:32:14.952905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:00.115 [2024-04-24 21:32:14.959172] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:00.115 [2024-04-24 21:32:14.959194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.115 [2024-04-24 21:32:14.959203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:00.115 [2024-04-24 21:32:14.965423] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:00.115 [2024-04-24 21:32:14.965447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.115 [2024-04-24 21:32:14.965456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:00.115 [2024-04-24 21:32:14.971669] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:00.115 [2024-04-24 
21:32:14.971693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.115 [2024-04-24 21:32:14.971703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:00.115 [2024-04-24 21:32:14.977914] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:00.115 [2024-04-24 21:32:14.977937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.115 [2024-04-24 21:32:14.977946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:00.115 [2024-04-24 21:32:14.984154] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:00.115 [2024-04-24 21:32:14.984176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.115 [2024-04-24 21:32:14.984190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:00.115 [2024-04-24 21:32:14.990399] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:00.115 [2024-04-24 21:32:14.990423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.115 [2024-04-24 21:32:14.990432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:00.115 [2024-04-24 21:32:14.996638] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:00.115 [2024-04-24 21:32:14.996660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.115 [2024-04-24 21:32:14.996669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:00.115 [2024-04-24 21:32:15.002888] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:00.115 [2024-04-24 21:32:15.002911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.115 [2024-04-24 21:32:15.002920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:00.115 [2024-04-24 21:32:15.009143] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:00.115 [2024-04-24 21:32:15.009165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.115 [2024-04-24 21:32:15.009175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:00.115 [2024-04-24 21:32:15.015395] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:00.115 [2024-04-24 21:32:15.015419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.115 [2024-04-24 21:32:15.015428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:00.115 [2024-04-24 21:32:15.021638] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:00.115 [2024-04-24 21:32:15.021660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.115 [2024-04-24 21:32:15.021670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:00.115 [2024-04-24 21:32:15.027870] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:00.115 [2024-04-24 21:32:15.027893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.115 [2024-04-24 21:32:15.027902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:00.115 [2024-04-24 21:32:15.034205] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:00.115 [2024-04-24 21:32:15.034229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.115 [2024-04-24 21:32:15.034239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:00.115 [2024-04-24 21:32:15.040386] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:00.115 [2024-04-24 21:32:15.040413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.115 [2024-04-24 21:32:15.040422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:00.115 [2024-04-24 21:32:15.045806] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:00.115 [2024-04-24 21:32:15.045829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.115 [2024-04-24 21:32:15.045838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:00.115 [2024-04-24 21:32:15.051029] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:00.115 [2024-04-24 21:32:15.051051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.115 [2024-04-24 21:32:15.051061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:00.115 [2024-04-24 21:32:15.056004] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:00.115 [2024-04-24 21:32:15.056026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.115 [2024-04-24 21:32:15.056036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:00.115 [2024-04-24 21:32:15.061119] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:00.115 [2024-04-24 21:32:15.061141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.115 [2024-04-24 21:32:15.061151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:00.115 [2024-04-24 21:32:15.066215] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:00.115 [2024-04-24 21:32:15.066237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.115 [2024-04-24 21:32:15.066246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:00.115 [2024-04-24 21:32:15.070744] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:00.115 [2024-04-24 21:32:15.070772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.115 [2024-04-24 21:32:15.070781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:00.115 [2024-04-24 21:32:15.075916] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:00.115 [2024-04-24 21:32:15.075940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.115 [2024-04-24 21:32:15.075950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:00.375 [2024-04-24 21:32:15.081046] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:00.375 [2024-04-24 21:32:15.081071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.375 [2024-04-24 21:32:15.081086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:00.375 [2024-04-24 21:32:15.086100] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:00.375 [2024-04-24 21:32:15.086124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.375 [2024-04-24 21:32:15.086134] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:00.375 [2024-04-24 21:32:15.091127] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:00.375 [2024-04-24 21:32:15.091150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.375 [2024-04-24 21:32:15.091160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:00.375 [2024-04-24 21:32:15.096169] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:00.375 [2024-04-24 21:32:15.096192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.375 [2024-04-24 21:32:15.096202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:00.375 [2024-04-24 21:32:15.101244] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:00.375 [2024-04-24 21:32:15.101274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.375 [2024-04-24 21:32:15.101285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:00.375 [2024-04-24 21:32:15.106747] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:00.375 [2024-04-24 21:32:15.106771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.375 [2024-04-24 21:32:15.106781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:00.375 [2024-04-24 21:32:15.111642] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:00.375 [2024-04-24 21:32:15.111669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.376 [2024-04-24 21:32:15.111679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:00.376 [2024-04-24 21:32:15.116856] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:00.376 [2024-04-24 21:32:15.116880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.376 [2024-04-24 21:32:15.116890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:00.376 [2024-04-24 21:32:15.121902] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:00.376 [2024-04-24 21:32:15.121926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18048 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.376 [2024-04-24 21:32:15.121936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:00.376 [2024-04-24 21:32:15.127042] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:00.376 [2024-04-24 21:32:15.127070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.376 [2024-04-24 21:32:15.127079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:00.376 [2024-04-24 21:32:15.132573] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:00.376 [2024-04-24 21:32:15.132597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.376 [2024-04-24 21:32:15.132607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:00.376 [2024-04-24 21:32:15.137657] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:00.376 [2024-04-24 21:32:15.137685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.376 [2024-04-24 21:32:15.137698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:00.376 [2024-04-24 21:32:15.142888] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:00.376 [2024-04-24 21:32:15.142913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.376 [2024-04-24 21:32:15.142924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:00.376 [2024-04-24 21:32:15.148070] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:00.376 [2024-04-24 21:32:15.148094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.376 [2024-04-24 21:32:15.148103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:00.376 [2024-04-24 21:32:15.153142] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:00.376 [2024-04-24 21:32:15.153165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.376 [2024-04-24 21:32:15.153175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:00.376 [2024-04-24 21:32:15.158203] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:00.376 [2024-04-24 21:32:15.158227] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.376 [2024-04-24 21:32:15.158237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:00.376 [2024-04-24 21:32:15.163361] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:00.376 [2024-04-24 21:32:15.163388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.376 [2024-04-24 21:32:15.163398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:00.376 [2024-04-24 21:32:15.167740] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:00.376 [2024-04-24 21:32:15.167765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.376 [2024-04-24 21:32:15.167779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:00.376 [2024-04-24 21:32:15.172138] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:00.376 [2024-04-24 21:32:15.172163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.376 [2024-04-24 21:32:15.172173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:00.376 [2024-04-24 21:32:15.177263] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:00.376 [2024-04-24 21:32:15.177291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.376 [2024-04-24 21:32:15.177300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:00.376 [2024-04-24 21:32:15.182220] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:00.376 [2024-04-24 21:32:15.182245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.376 [2024-04-24 21:32:15.182254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:00.376 [2024-04-24 21:32:15.187293] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:00.376 [2024-04-24 21:32:15.187317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.376 [2024-04-24 21:32:15.187327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:00.376 [2024-04-24 21:32:15.192319] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x614000007240) 00:27:00.376 [2024-04-24 21:32:15.192345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.376 [2024-04-24 21:32:15.192355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:00.376 [2024-04-24 21:32:15.197313] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:00.376 [2024-04-24 21:32:15.197339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.376 [2024-04-24 21:32:15.197349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:00.376 [2024-04-24 21:32:15.202318] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:00.376 [2024-04-24 21:32:15.202342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.376 [2024-04-24 21:32:15.202351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:00.376 [2024-04-24 21:32:15.205068] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:00.376 [2024-04-24 21:32:15.205090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.376 [2024-04-24 21:32:15.205100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:00.376 [2024-04-24 21:32:15.210078] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:00.376 [2024-04-24 21:32:15.210107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.376 [2024-04-24 21:32:15.210116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:00.376 [2024-04-24 21:32:15.215116] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:00.376 [2024-04-24 21:32:15.215138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.376 [2024-04-24 21:32:15.215147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:00.376 [2024-04-24 21:32:15.220279] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:00.376 [2024-04-24 21:32:15.220303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.376 [2024-04-24 21:32:15.220312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:00.376 [2024-04-24 21:32:15.225519] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:00.376 [2024-04-24 21:32:15.225541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.376 [2024-04-24 21:32:15.225551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:00.376 [2024-04-24 21:32:15.230658] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:00.376 [2024-04-24 21:32:15.230680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.376 [2024-04-24 21:32:15.230689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:00.376 [2024-04-24 21:32:15.235509] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:00.376 [2024-04-24 21:32:15.235533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.376 [2024-04-24 21:32:15.235543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:00.376 [2024-04-24 21:32:15.240615] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:00.376 [2024-04-24 21:32:15.240640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.376 [2024-04-24 21:32:15.240649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:00.376 [2024-04-24 21:32:15.245666] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:00.376 [2024-04-24 21:32:15.245689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.377 [2024-04-24 21:32:15.245698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:00.377 [2024-04-24 21:32:15.250735] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:00.377 [2024-04-24 21:32:15.250760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.377 [2024-04-24 21:32:15.250769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:00.377 [2024-04-24 21:32:15.255921] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:00.377 [2024-04-24 21:32:15.255945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.377 [2024-04-24 21:32:15.255955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:00.377 [2024-04-24 21:32:15.261231] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:00.377 [2024-04-24 21:32:15.261255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.377 [2024-04-24 21:32:15.261263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:00.377 [2024-04-24 21:32:15.266436] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:00.377 [2024-04-24 21:32:15.266460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.377 [2024-04-24 21:32:15.266470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:00.377 [2024-04-24 21:32:15.271561] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:00.377 [2024-04-24 21:32:15.271586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.377 [2024-04-24 21:32:15.271596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:00.377 [2024-04-24 21:32:15.276538] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:00.377 [2024-04-24 21:32:15.276563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.377 [2024-04-24 21:32:15.276573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:00.377 [2024-04-24 21:32:15.281447] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:00.377 [2024-04-24 21:32:15.281472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.377 [2024-04-24 21:32:15.281482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:00.377 [2024-04-24 21:32:15.286279] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:00.377 [2024-04-24 21:32:15.286303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.377 [2024-04-24 21:32:15.286313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:00.377 [2024-04-24 21:32:15.291156] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:00.377 [2024-04-24 21:32:15.291180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.377 [2024-04-24 21:32:15.291190] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
[00:27:00.377 through 00:27:01.166: the same three-entry pattern repeats for each subsequent READ on qid:1 (nsid:1, len:32, cid 0-15, varying lba) from 21:32:15.296 to 21:32:16.028: nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done reports a data digest error on tqpair=(0x614000007240), nvme_qpair.c: 243:nvme_io_qpair_print_command prints the READ command (SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), and nvme_qpair.c: 474:spdk_nvme_print_completion prints the completion as COMMAND TRANSIENT TRANSPORT ERROR (00/22), p:0 m:0 dnr:0]
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.165 [2024-04-24 21:32:16.002152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.165 [2024-04-24 21:32:16.006827] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:01.165 [2024-04-24 21:32:16.006852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.165 [2024-04-24 21:32:16.006862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:01.165 [2024-04-24 21:32:16.011563] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:01.165 [2024-04-24 21:32:16.011589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.165 [2024-04-24 21:32:16.011598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:01.165 [2024-04-24 21:32:16.016493] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:01.165 [2024-04-24 21:32:16.016519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.165 [2024-04-24 21:32:16.016529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:01.165 [2024-04-24 21:32:16.021190] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:01.166 [2024-04-24 21:32:16.021214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.166 [2024-04-24 21:32:16.021224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.166 [2024-04-24 21:32:16.023876] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:01.166 [2024-04-24 21:32:16.023900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.166 [2024-04-24 21:32:16.023911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:01.166 [2024-04-24 21:32:16.028639] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:01.166 [2024-04-24 21:32:16.028665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.166 [2024-04-24 21:32:16.028675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:01.166 [2024-04-24 21:32:16.033494] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x614000007240) 00:27:01.166 [2024-04-24 21:32:16.033519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.166 [2024-04-24 21:32:16.033528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:01.166 [2024-04-24 21:32:16.038402] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:01.166 [2024-04-24 21:32:16.038425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.166 [2024-04-24 21:32:16.038434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.166 [2024-04-24 21:32:16.042574] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:01.166 [2024-04-24 21:32:16.042609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.166 [2024-04-24 21:32:16.042619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:01.166 [2024-04-24 21:32:16.047468] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:01.166 [2024-04-24 21:32:16.047496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.166 [2024-04-24 21:32:16.047507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:01.166 [2024-04-24 21:32:16.053236] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:01.166 [2024-04-24 21:32:16.053261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.166 [2024-04-24 21:32:16.053277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:01.166 [2024-04-24 21:32:16.058030] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:01.166 [2024-04-24 21:32:16.058054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.166 [2024-04-24 21:32:16.058064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.166 [2024-04-24 21:32:16.064604] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:01.166 [2024-04-24 21:32:16.064630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.166 [2024-04-24 21:32:16.064639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:01.166 [2024-04-24 21:32:16.071212] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:01.166 [2024-04-24 21:32:16.071240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.166 [2024-04-24 21:32:16.071250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:01.166 [2024-04-24 21:32:16.077416] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:01.166 [2024-04-24 21:32:16.077444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.166 [2024-04-24 21:32:16.077454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:01.166 [2024-04-24 21:32:16.083010] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:01.166 [2024-04-24 21:32:16.083035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.166 [2024-04-24 21:32:16.083046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.166 [2024-04-24 21:32:16.087847] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:01.166 [2024-04-24 21:32:16.087872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.166 [2024-04-24 21:32:16.087883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:01.166 [2024-04-24 21:32:16.092548] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:01.166 [2024-04-24 21:32:16.092575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.166 [2024-04-24 21:32:16.092587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:01.166 [2024-04-24 21:32:16.097961] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:01.166 [2024-04-24 21:32:16.097988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.166 [2024-04-24 21:32:16.097998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:01.166 [2024-04-24 21:32:16.103594] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:01.166 [2024-04-24 21:32:16.103623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.166 [2024-04-24 21:32:16.103633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.166 [2024-04-24 21:32:16.109192] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:01.166 [2024-04-24 21:32:16.109217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.166 [2024-04-24 21:32:16.109227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:01.166 [2024-04-24 21:32:16.114893] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:01.166 [2024-04-24 21:32:16.114918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.166 [2024-04-24 21:32:16.114927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:01.166 [2024-04-24 21:32:16.120894] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:01.166 [2024-04-24 21:32:16.120919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.166 [2024-04-24 21:32:16.120929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:01.429 [2024-04-24 21:32:16.126996] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:01.429 [2024-04-24 21:32:16.127024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.429 [2024-04-24 21:32:16.127035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.429 [2024-04-24 21:32:16.132949] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:01.429 [2024-04-24 21:32:16.132973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.429 [2024-04-24 21:32:16.132983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:01.429 [2024-04-24 21:32:16.136206] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:01.429 [2024-04-24 21:32:16.136235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.429 [2024-04-24 21:32:16.136245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:01.429 [2024-04-24 21:32:16.141520] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:01.429 [2024-04-24 21:32:16.141544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.429 [2024-04-24 21:32:16.141554] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:01.429 [2024-04-24 21:32:16.147176] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:01.429 [2024-04-24 21:32:16.147202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.429 [2024-04-24 21:32:16.147212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.429 [2024-04-24 21:32:16.152846] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:01.429 [2024-04-24 21:32:16.152872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.429 [2024-04-24 21:32:16.152882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:01.429 [2024-04-24 21:32:16.158431] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:01.429 [2024-04-24 21:32:16.158457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.429 [2024-04-24 21:32:16.158467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:01.429 [2024-04-24 21:32:16.163335] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:01.429 [2024-04-24 21:32:16.163361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.429 [2024-04-24 21:32:16.163370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:01.429 [2024-04-24 21:32:16.168850] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:01.429 [2024-04-24 21:32:16.168877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.429 [2024-04-24 21:32:16.168887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.429 [2024-04-24 21:32:16.173104] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:01.429 [2024-04-24 21:32:16.173129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.429 [2024-04-24 21:32:16.173139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:01.429 [2024-04-24 21:32:16.177979] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:01.429 [2024-04-24 21:32:16.178006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10816 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.429 [2024-04-24 21:32:16.178016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:01.429 [2024-04-24 21:32:16.182714] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:01.429 [2024-04-24 21:32:16.182741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.429 [2024-04-24 21:32:16.182751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:01.429 [2024-04-24 21:32:16.187477] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:01.430 [2024-04-24 21:32:16.187503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.430 [2024-04-24 21:32:16.187513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.430 [2024-04-24 21:32:16.192091] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:01.430 [2024-04-24 21:32:16.192116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.430 [2024-04-24 21:32:16.192126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:01.430 [2024-04-24 21:32:16.196910] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:01.430 [2024-04-24 21:32:16.196945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.430 [2024-04-24 21:32:16.196955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:01.430 [2024-04-24 21:32:16.201782] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:01.430 [2024-04-24 21:32:16.201808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.430 [2024-04-24 21:32:16.201818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:01.430 [2024-04-24 21:32:16.206746] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:01.430 [2024-04-24 21:32:16.206771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.430 [2024-04-24 21:32:16.206780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.430 [2024-04-24 21:32:16.211211] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:01.430 [2024-04-24 21:32:16.211235] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.430 [2024-04-24 21:32:16.211244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:01.430 [2024-04-24 21:32:16.215973] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:01.430 [2024-04-24 21:32:16.215997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.430 [2024-04-24 21:32:16.216007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:01.430 [2024-04-24 21:32:16.220941] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:01.430 [2024-04-24 21:32:16.220967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.430 [2024-04-24 21:32:16.220981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:01.430 [2024-04-24 21:32:16.225612] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:01.430 [2024-04-24 21:32:16.225638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.430 [2024-04-24 21:32:16.225647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.430 [2024-04-24 21:32:16.230321] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:01.430 [2024-04-24 21:32:16.230346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.430 [2024-04-24 21:32:16.230355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:01.430 [2024-04-24 21:32:16.235170] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:01.430 [2024-04-24 21:32:16.235196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.430 [2024-04-24 21:32:16.235205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:01.430 [2024-04-24 21:32:16.240120] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:01.430 [2024-04-24 21:32:16.240146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.430 [2024-04-24 21:32:16.240156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:01.430 [2024-04-24 21:32:16.244813] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x614000007240) 00:27:01.430 [2024-04-24 21:32:16.244839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.430 [2024-04-24 21:32:16.244848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.430 [2024-04-24 21:32:16.249691] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:01.430 [2024-04-24 21:32:16.249714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.430 [2024-04-24 21:32:16.249724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:01.430 [2024-04-24 21:32:16.254446] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:01.430 [2024-04-24 21:32:16.254471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.430 [2024-04-24 21:32:16.254481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:01.430 [2024-04-24 21:32:16.259231] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:01.430 [2024-04-24 21:32:16.259256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.430 [2024-04-24 21:32:16.259265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:01.430 [2024-04-24 21:32:16.264059] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:01.430 [2024-04-24 21:32:16.264084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.430 [2024-04-24 21:32:16.264094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.430 [2024-04-24 21:32:16.268804] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:01.430 [2024-04-24 21:32:16.268829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.430 [2024-04-24 21:32:16.268839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:01.430 [2024-04-24 21:32:16.273782] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:01.430 [2024-04-24 21:32:16.273809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.430 [2024-04-24 21:32:16.273818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:01.430 [2024-04-24 21:32:16.278691] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:01.430 [2024-04-24 21:32:16.278716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.430 [2024-04-24 21:32:16.278725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:01.430 [2024-04-24 21:32:16.283466] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:01.430 [2024-04-24 21:32:16.283490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.430 [2024-04-24 21:32:16.283500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.430 [2024-04-24 21:32:16.288006] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:01.430 [2024-04-24 21:32:16.288030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.430 [2024-04-24 21:32:16.288039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:01.430 [2024-04-24 21:32:16.292574] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:01.430 [2024-04-24 21:32:16.292598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.430 [2024-04-24 21:32:16.292607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:01.430 [2024-04-24 21:32:16.297416] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:01.430 [2024-04-24 21:32:16.297442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.430 [2024-04-24 21:32:16.297451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:01.430 [2024-04-24 21:32:16.302309] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:01.430 [2024-04-24 21:32:16.302335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.430 [2024-04-24 21:32:16.302348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.430 [2024-04-24 21:32:16.307224] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:01.430 [2024-04-24 21:32:16.307251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.430 [2024-04-24 21:32:16.307264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:01.430 [2024-04-24 21:32:16.312068] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:01.430 [2024-04-24 21:32:16.312095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.431 [2024-04-24 21:32:16.312104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:01.431 [2024-04-24 21:32:16.317108] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:01.431 [2024-04-24 21:32:16.317134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.431 [2024-04-24 21:32:16.317145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:01.431 [2024-04-24 21:32:16.322176] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:01.431 [2024-04-24 21:32:16.322202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.431 [2024-04-24 21:32:16.322212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.431 [2024-04-24 21:32:16.327082] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:01.431 [2024-04-24 21:32:16.327106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.431 [2024-04-24 21:32:16.327116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:01.431 [2024-04-24 21:32:16.332087] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:01.431 [2024-04-24 21:32:16.332112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.431 [2024-04-24 21:32:16.332122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:01.431 [2024-04-24 21:32:16.336956] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:01.431 [2024-04-24 21:32:16.336980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.431 [2024-04-24 21:32:16.336990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:01.431 [2024-04-24 21:32:16.341965] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:01.431 [2024-04-24 21:32:16.341989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.431 [2024-04-24 21:32:16.341999] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.431 [2024-04-24 21:32:16.346668] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:01.431 [2024-04-24 21:32:16.346697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.431 [2024-04-24 21:32:16.346708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:01.431 [2024-04-24 21:32:16.351706] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:01.431 [2024-04-24 21:32:16.351733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.431 [2024-04-24 21:32:16.351742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:01.431 [2024-04-24 21:32:16.356848] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:01.431 [2024-04-24 21:32:16.356873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.431 [2024-04-24 21:32:16.356883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:01.431 [2024-04-24 21:32:16.361729] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:01.431 [2024-04-24 21:32:16.361755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.431 [2024-04-24 21:32:16.361764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.431 [2024-04-24 21:32:16.366606] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:01.431 [2024-04-24 21:32:16.366631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.431 [2024-04-24 21:32:16.366640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:01.431 [2024-04-24 21:32:16.371455] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:01.431 [2024-04-24 21:32:16.371480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.431 [2024-04-24 21:32:16.371490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:01.431 [2024-04-24 21:32:16.376082] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:01.431 [2024-04-24 21:32:16.376107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15456 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:27:01.431 [2024-04-24 21:32:16.376116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:01.431 [2024-04-24 21:32:16.381021] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:01.431 [2024-04-24 21:32:16.381046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.431 [2024-04-24 21:32:16.381056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.431 [2024-04-24 21:32:16.386132] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:01.431 [2024-04-24 21:32:16.386157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.431 [2024-04-24 21:32:16.386171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:01.694 [2024-04-24 21:32:16.391448] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:01.694 [2024-04-24 21:32:16.391474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.694 [2024-04-24 21:32:16.391484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:01.694 [2024-04-24 21:32:16.396211] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:01.694 [2024-04-24 21:32:16.396235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.694 [2024-04-24 21:32:16.396245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:01.694 [2024-04-24 21:32:16.400890] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:01.694 [2024-04-24 21:32:16.400916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.694 [2024-04-24 21:32:16.400925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.694 [2024-04-24 21:32:16.405980] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:01.694 [2024-04-24 21:32:16.406006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.694 [2024-04-24 21:32:16.406016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:01.694 [2024-04-24 21:32:16.411436] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:01.694 [2024-04-24 21:32:16.411461] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.694 [2024-04-24 21:32:16.411471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:01.694 [2024-04-24 21:32:16.417245] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:01.694 [2024-04-24 21:32:16.417275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.694 [2024-04-24 21:32:16.417285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:01.694 [2024-04-24 21:32:16.422630] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:01.694 [2024-04-24 21:32:16.422656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.694 [2024-04-24 21:32:16.422666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.694 [2024-04-24 21:32:16.427872] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:01.694 [2024-04-24 21:32:16.427897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.694 [2024-04-24 21:32:16.427906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:01.694 [2024-04-24 21:32:16.431556] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:01.694 [2024-04-24 21:32:16.431581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.694 [2024-04-24 21:32:16.431590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:01.694 [2024-04-24 21:32:16.436748] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:01.694 [2024-04-24 21:32:16.436772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.694 [2024-04-24 21:32:16.436782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:01.694 [2024-04-24 21:32:16.441278] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:01.694 [2024-04-24 21:32:16.441301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.694 [2024-04-24 21:32:16.441311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.694 [2024-04-24 21:32:16.446517] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x614000007240) 00:27:01.694 [2024-04-24 21:32:16.446542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.694 [2024-04-24 21:32:16.446551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:01.694 [2024-04-24 21:32:16.452067] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:01.694 [2024-04-24 21:32:16.452091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.694 [2024-04-24 21:32:16.452101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:01.694 [2024-04-24 21:32:16.457832] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:01.694 [2024-04-24 21:32:16.457855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.694 [2024-04-24 21:32:16.457864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:01.694 [2024-04-24 21:32:16.463076] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:01.694 [2024-04-24 21:32:16.463103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.694 [2024-04-24 21:32:16.463112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.694 [2024-04-24 21:32:16.468389] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:01.694 [2024-04-24 21:32:16.468419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.694 [2024-04-24 21:32:16.468430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:01.694 [2024-04-24 21:32:16.473753] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:01.694 [2024-04-24 21:32:16.473780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.694 [2024-04-24 21:32:16.473795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:01.694 [2024-04-24 21:32:16.478468] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:01.694 [2024-04-24 21:32:16.478494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.694 [2024-04-24 21:32:16.478504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:01.694 [2024-04-24 21:32:16.483454] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240)
00:27:01.694 [2024-04-24 21:32:16.483478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:01.694 [2024-04-24 21:32:16.483488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:01.694 [2024-04-24 21:32:16.488094] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240)
00:27:01.694 [2024-04-24 21:32:16.488119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:01.695 [2024-04-24 21:32:16.488128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[~50 further data digest error / COMMAND TRANSIENT TRANSPORT ERROR (00/22) READ triplets on tqpair 0x614000007240 (qid:1, cid and lba varying, sqhd cycling 0001/0021/0041/0061), timestamps 21:32:16.493 through 21:32:16.732]
00:27:01.956 [2024-04-24 21:32:16.737726] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240)
00:27:01.956 [2024-04-24 21:32:16.737752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:01.956 [2024-04-24 21:32:16.737761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:01.956 
00:27:01.956 Latency(us)
00:27:01.956 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:01.956 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:27:01.956 nvme0n1 : 2.00 5901.13 737.64 0.00 0.00 2709.00 506.61 12486.33
00:27:01.956 ===================================================================================================================
00:27:01.956 Total : 5901.13 737.64 0.00 0.00 2709.00 506.61 12486.33
00:27:01.956 0
00:27:01.956 21:32:16 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:27:01.956 21:32:16 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:27:01.956 21:32:16 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:27:01.956 21:32:16 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:27:01.956 | .driver_specific
00:27:01.956 | .nvme_error
00:27:01.956 | .status_code
00:27:01.956 | .command_transient_transport_error'
00:27:01.956 21:32:16 -- host/digest.sh@71 -- # (( 380 > 0 ))
00:27:01.956 21:32:16 -- host/digest.sh@73 -- # killprocess 1369173
00:27:01.956 21:32:16 -- common/autotest_common.sh@936 -- # '[' -z 1369173 ']'
00:27:01.956 21:32:16 -- common/autotest_common.sh@940 -- # kill -0 1369173
00:27:01.956 21:32:16 -- common/autotest_common.sh@941 -- # uname
00:27:01.956 21:32:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:27:02.214 21:32:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1369173
00:27:02.214 21:32:16 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:27:02.214 21:32:16 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:27:02.214 21:32:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1369173'
00:27:02.214 killing process with pid 1369173
00:27:02.214 21:32:16 -- common/autotest_common.sh@955 -- # kill 1369173
00:27:02.214 Received shutdown signal, test time was about 2.000000 seconds
00:27:02.214 
00:27:02.214 Latency(us)
00:27:02.214 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:02.214 ===================================================================================================================
00:27:02.214 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:27:02.214 21:32:16 -- common/autotest_common.sh@960 -- # wait 1369173
00:27:02.473 21:32:17 -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:27:02.473 21:32:17 -- host/digest.sh@54 -- # local rw bs qd
00:27:02.473 21:32:17 -- host/digest.sh@56 -- # rw=randwrite
00:27:02.473 21:32:17 -- host/digest.sh@56 -- # bs=4096
00:27:02.473 21:32:17 -- host/digest.sh@56 -- # qd=128
00:27:02.473 21:32:17 -- host/digest.sh@58 -- # bperfpid=1369791
00:27:02.473 21:32:17 -- host/digest.sh@60 -- # waitforlisten 1369791 /var/tmp/bperf.sock
00:27:02.473 21:32:17 -- common/autotest_common.sh@817 -- # '[' -z 1369791 ']'
00:27:02.473 21:32:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock
00:27:02.473 21:32:17 -- common/autotest_common.sh@822 -- # local max_retries=100
00:27:02.473 21:32:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:27:02.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:27:02.473 21:32:17 -- common/autotest_common.sh@826 -- # xtrace_disable
00:27:02.473 21:32:17 -- common/autotest_common.sh@10 -- # set +x
00:27:02.473 21:32:17 -- host/digest.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
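[Note: bdevperf is started here with -z, which brings it up idle with no bdevs and waits to be configured over the RPC socket named by -r; the controller attach and error-injection RPCs that follow all target /var/tmp/bperf.sock, and I/O only begins once perform_tests is issued. The pattern, sketched with the paths traced in this log:

    # start bdevperf idle on a private RPC socket, then drive it via RPC
    ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &
    ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller ...   # configure bdevs (full flags below)
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests # kick off the timed run
]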
00:27:02.473 [2024-04-24 21:32:17.360882] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization...
00:27:02.473 [2024-04-24 21:32:17.360997] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1369791 ]
00:27:02.473 EAL: No free 2048 kB hugepages reported on node 1
00:27:02.473 [2024-04-24 21:32:17.470891] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:02.473 [2024-04-24 21:32:17.559130] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:27:03.300 21:32:18 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:27:03.300 21:32:18 -- common/autotest_common.sh@850 -- # return 0
00:27:03.300 21:32:18 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:27:03.300 21:32:18 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:27:03.300 21:32:18 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:27:03.300 21:32:18 -- common/autotest_common.sh@549 -- # xtrace_disable
00:27:03.300 21:32:18 -- common/autotest_common.sh@10 -- # set +x
00:27:03.300 21:32:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:27:03.300 21:32:18 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:27:03.300 21:32:18 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:27:03.870 nvme0n1
00:27:03.870 21:32:18 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:27:03.870 21:32:18 -- common/autotest_common.sh@549 -- # xtrace_disable
00:27:03.870 21:32:18 -- common/autotest_common.sh@10 -- # set +x
00:27:03.870 21:32:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:27:03.870 21:32:18 -- host/digest.sh@69 -- # bperf_py perform_tests
00:27:03.870 21:32:18 -- host/digest.sh@19 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:27:03.870 Running I/O for 2 seconds...
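[Note: the RPC sequence above is what produces the digest failures that follow: crc32c error injection is disabled first so the controller can attach cleanly with --ddgst (NVMe/TCP data digest) enabled, then switched to corrupt mode (-i 256, per the trace), so crc32c results computed through the accel framework come back corrupted and the affected WRITEs complete with COMMAND TRANSIENT TRANSPORT ERROR (00/22); with --bdev-retry-count -1 those completions are retried rather than counted under Fail/s. Condensed, the traced sequence is:

    RPC='./scripts/rpc.py -s /var/tmp/bperf.sock'
    $RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    $RPC accel_error_inject_error -o crc32c -t disable
    $RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
         -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    $RPC accel_error_inject_error -o crc32c -t corrupt -i 256
]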
00:27:03.870 [2024-04-24 21:32:18.683697] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f9b30
00:27:03.870 [2024-04-24 21:32:18.683870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:16306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:03.870 [2024-04-24 21:32:18.683911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:27:03.870 [2024-04-24 21:32:18.693423] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f9b30
00:27:03.870 [2024-04-24 21:32:18.693572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:10297 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:03.870 [2024-04-24 21:32:18.693602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:007b p:0 m:0 dnr:0
[~70 further Data digest error / COMMAND TRANSIENT TRANSPORT ERROR (00/22) WRITE triplets on tqpair 0x618000004480 with pdu=0x2000195f9b30 (qid:1, cid cycling 121-126 and 1, lba varying, sqhd:007b), timestamps 21:32:18.703 through 21:32:19.397]
00:27:04.648 [2024-04-24 21:32:19.407178] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f9b30
00:27:04.648 [2024-04-24 21:32:19.407324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*:
WRITE sqid:1 cid:126 nsid:1 lba:23423 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.648 [2024-04-24 21:32:19.407345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:04.648 [2024-04-24 21:32:19.416778] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f9b30 00:27:04.648 [2024-04-24 21:32:19.416918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8660 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.648 [2024-04-24 21:32:19.416939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:04.648 [2024-04-24 21:32:19.426371] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f9b30 00:27:04.648 [2024-04-24 21:32:19.426512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:15689 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.648 [2024-04-24 21:32:19.426533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:04.648 [2024-04-24 21:32:19.435970] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f9b30 00:27:04.648 [2024-04-24 21:32:19.436113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:22069 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.648 [2024-04-24 21:32:19.436132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:04.648 [2024-04-24 21:32:19.445573] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f9b30 00:27:04.648 [2024-04-24 21:32:19.445716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:4952 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.648 [2024-04-24 21:32:19.445737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:04.648 [2024-04-24 21:32:19.455175] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f9b30 00:27:04.648 [2024-04-24 21:32:19.455318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:5763 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.648 [2024-04-24 21:32:19.455340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:04.648 [2024-04-24 21:32:19.464773] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f9b30 00:27:04.648 [2024-04-24 21:32:19.464912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:9057 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.648 [2024-04-24 21:32:19.464935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:04.648 [2024-04-24 21:32:19.474391] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f9b30 00:27:04.648 [2024-04-24 
21:32:19.474532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:1955 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.648 [2024-04-24 21:32:19.474555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:04.648 [2024-04-24 21:32:19.483980] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f9b30 00:27:04.648 [2024-04-24 21:32:19.484121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:761 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.648 [2024-04-24 21:32:19.484142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:04.648 [2024-04-24 21:32:19.493586] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f9b30 00:27:04.648 [2024-04-24 21:32:19.493726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:20537 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.648 [2024-04-24 21:32:19.493748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:04.648 [2024-04-24 21:32:19.503198] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f9b30 00:27:04.648 [2024-04-24 21:32:19.503345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:23562 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.648 [2024-04-24 21:32:19.503368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:04.648 [2024-04-24 21:32:19.512806] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f9b30 00:27:04.648 [2024-04-24 21:32:19.512947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:19598 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.648 [2024-04-24 21:32:19.512969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:04.648 [2024-04-24 21:32:19.522409] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f9b30 00:27:04.648 [2024-04-24 21:32:19.522553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:5184 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.648 [2024-04-24 21:32:19.522583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:04.648 [2024-04-24 21:32:19.532022] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f9b30 00:27:04.648 [2024-04-24 21:32:19.532163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:1421 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.648 [2024-04-24 21:32:19.532185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:04.648 [2024-04-24 21:32:19.541637] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x618000004480) with pdu=0x2000195f9b30 00:27:04.648 [2024-04-24 21:32:19.541780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:19320 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.648 [2024-04-24 21:32:19.541801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:04.648 [2024-04-24 21:32:19.551241] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f9b30 00:27:04.648 [2024-04-24 21:32:19.551386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6823 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.648 [2024-04-24 21:32:19.551408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:04.649 [2024-04-24 21:32:19.560844] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f9b30 00:27:04.649 [2024-04-24 21:32:19.560985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:23862 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.649 [2024-04-24 21:32:19.561007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:04.649 [2024-04-24 21:32:19.570451] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f9b30 00:27:04.649 [2024-04-24 21:32:19.570591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:16073 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.649 [2024-04-24 21:32:19.570612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:04.649 [2024-04-24 21:32:19.580056] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f9b30 00:27:04.649 [2024-04-24 21:32:19.580195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:19727 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.649 [2024-04-24 21:32:19.580218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:04.649 [2024-04-24 21:32:19.589658] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f9b30 00:27:04.649 [2024-04-24 21:32:19.589799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:18507 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.649 [2024-04-24 21:32:19.589822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:04.649 [2024-04-24 21:32:19.599285] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f9b30 00:27:04.649 [2024-04-24 21:32:19.599424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:9754 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.649 [2024-04-24 21:32:19.599447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:04.649 [2024-04-24 
21:32:19.608955] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f9b30 00:27:04.649 [2024-04-24 21:32:19.609106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:21474 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.649 [2024-04-24 21:32:19.609127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:04.909 [2024-04-24 21:32:19.618580] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f9b30 00:27:04.909 [2024-04-24 21:32:19.618722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3425 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.909 [2024-04-24 21:32:19.618742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:04.909 [2024-04-24 21:32:19.628197] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f9b30 00:27:04.909 [2024-04-24 21:32:19.628344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:4748 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.909 [2024-04-24 21:32:19.628366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:04.909 [2024-04-24 21:32:19.637803] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f9b30 00:27:04.909 [2024-04-24 21:32:19.637942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:17235 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.909 [2024-04-24 21:32:19.637962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:04.909 [2024-04-24 21:32:19.647395] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f9b30 00:27:04.909 [2024-04-24 21:32:19.647535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:10882 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.909 [2024-04-24 21:32:19.647555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:04.909 [2024-04-24 21:32:19.656980] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f9b30 00:27:04.909 [2024-04-24 21:32:19.657121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:1073 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.909 [2024-04-24 21:32:19.657142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:04.909 [2024-04-24 21:32:19.666573] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f9b30 00:27:04.909 [2024-04-24 21:32:19.666715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:9001 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.909 [2024-04-24 21:32:19.666736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:125 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:04.909 [2024-04-24 21:32:19.676158] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f9b30 00:27:04.909 [2024-04-24 21:32:19.676298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:6016 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.909 [2024-04-24 21:32:19.676325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:04.909 [2024-04-24 21:32:19.685737] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f9b30 00:27:04.909 [2024-04-24 21:32:19.685882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22527 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.909 [2024-04-24 21:32:19.685907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:04.909 [2024-04-24 21:32:19.695303] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f9b30 00:27:04.909 [2024-04-24 21:32:19.695442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:7798 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.909 [2024-04-24 21:32:19.695469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:04.909 [2024-04-24 21:32:19.704913] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f9b30 00:27:04.909 [2024-04-24 21:32:19.705054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:10896 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.909 [2024-04-24 21:32:19.705076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:04.909 [2024-04-24 21:32:19.714483] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f9b30 00:27:04.909 [2024-04-24 21:32:19.714623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:3436 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.909 [2024-04-24 21:32:19.714647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:04.909 [2024-04-24 21:32:19.724087] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f9b30 00:27:04.909 [2024-04-24 21:32:19.724226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:14177 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.909 [2024-04-24 21:32:19.724249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:04.910 [2024-04-24 21:32:19.733671] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f9b30 00:27:04.910 [2024-04-24 21:32:19.733811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:8272 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.910 [2024-04-24 21:32:19.733835] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:04.910 [2024-04-24 21:32:19.743291] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f9b30 00:27:04.910 [2024-04-24 21:32:19.743432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:17890 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.910 [2024-04-24 21:32:19.743455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:04.910 [2024-04-24 21:32:19.752848] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f9b30 00:27:04.910 [2024-04-24 21:32:19.752988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7697 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.910 [2024-04-24 21:32:19.753013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:04.910 [2024-04-24 21:32:19.762464] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f9b30 00:27:04.910 [2024-04-24 21:32:19.762617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:19295 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.910 [2024-04-24 21:32:19.762639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:04.910 [2024-04-24 21:32:19.772156] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f9b30 00:27:04.910 [2024-04-24 21:32:19.772303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:6931 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.910 [2024-04-24 21:32:19.772325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:04.910 [2024-04-24 21:32:19.781756] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f9b30 00:27:04.910 [2024-04-24 21:32:19.781899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:8430 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.910 [2024-04-24 21:32:19.781921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:04.910 [2024-04-24 21:32:19.791342] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f9b30 00:27:04.910 [2024-04-24 21:32:19.791481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:668 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.910 [2024-04-24 21:32:19.791503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:04.910 [2024-04-24 21:32:19.800927] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f9b30 00:27:04.910 [2024-04-24 21:32:19.801070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:8289 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:27:04.910 [2024-04-24 21:32:19.801092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:04.910 [2024-04-24 21:32:19.810513] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f9b30 00:27:04.910 [2024-04-24 21:32:19.810638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:10053 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.910 [2024-04-24 21:32:19.810661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:04.910 [2024-04-24 21:32:19.820107] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f9b30 00:27:04.910 [2024-04-24 21:32:19.820235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7219 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.910 [2024-04-24 21:32:19.820258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:04.910 [2024-04-24 21:32:19.829697] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f9b30 00:27:04.910 [2024-04-24 21:32:19.829822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:9832 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.910 [2024-04-24 21:32:19.829845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:04.910 [2024-04-24 21:32:19.839292] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f9b30 00:27:04.910 [2024-04-24 21:32:19.839419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:17022 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.910 [2024-04-24 21:32:19.839441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:04.910 [2024-04-24 21:32:19.848879] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f9b30 00:27:04.910 [2024-04-24 21:32:19.849003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:22166 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.910 [2024-04-24 21:32:19.849025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:04.910 [2024-04-24 21:32:19.858465] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f9b30 00:27:04.910 [2024-04-24 21:32:19.858591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:23266 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.910 [2024-04-24 21:32:19.858613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:04.910 [2024-04-24 21:32:19.868048] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f9b30 00:27:04.910 [2024-04-24 21:32:19.868175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:125 nsid:1 lba:3328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.910 [2024-04-24 21:32:19.868198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:05.171 [2024-04-24 21:32:19.877646] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f9b30 00:27:05.171 [2024-04-24 21:32:19.877774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:14206 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.171 [2024-04-24 21:32:19.877798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:05.171 [2024-04-24 21:32:19.887223] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f9b30 00:27:05.171 [2024-04-24 21:32:19.887354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25068 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.171 [2024-04-24 21:32:19.887378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:05.171 [2024-04-24 21:32:19.896811] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f9b30 00:27:05.171 [2024-04-24 21:32:19.896936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:21722 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.171 [2024-04-24 21:32:19.896959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:05.171 [2024-04-24 21:32:19.906398] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f9b30 00:27:05.171 [2024-04-24 21:32:19.906524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:32 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.171 [2024-04-24 21:32:19.906547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:05.171 [2024-04-24 21:32:19.915970] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f9b30 00:27:05.171 [2024-04-24 21:32:19.916096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:24792 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.171 [2024-04-24 21:32:19.916119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:05.171 [2024-04-24 21:32:19.925554] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f9b30 00:27:05.171 [2024-04-24 21:32:19.925680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:25433 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.171 [2024-04-24 21:32:19.925702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:05.171 [2024-04-24 21:32:19.935134] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f9b30 00:27:05.172 [2024-04-24 
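For anyone triaging this run: the tcp.c:2047 data_crc32_calc_done errors above mean that the CRC32C data digest carried in a received NVMe/TCP PDU did not match the digest recomputed over the PDU's data, so the payload is rejected and the host completes each WRITE with COMMAND TRANSIENT TRANSPORT ERROR, printed as (00/22), i.e. status code type 0x0 and status code 0x22, a retryable transport-level failure rather than a media error. Below is a minimal self-contained sketch of that digest check, not SPDK's actual implementation; pdu_data, wire_ddgst, and the injected bit flip are hypothetical stand-ins used only for illustration.

    /*
     * Minimal sketch of the NVMe/TCP data-digest check that
     * data_crc32_calc_done performs conceptually: recompute CRC32C
     * (Castagnoli polynomial, reflected, init and final XOR 0xFFFFFFFF)
     * over the PDU data and compare it with the DDGST value received on
     * the wire. Assumption-laden illustration, not SPDK code.
     */
    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    static uint32_t crc32c(const uint8_t *buf, size_t len)
    {
        uint32_t crc = 0xFFFFFFFFu;

        for (size_t i = 0; i < len; i++) {
            crc ^= buf[i];
            for (int bit = 0; bit < 8; bit++) {
                /* 0x82F63B78 is the reflected CRC-32C polynomial. */
                crc = (crc >> 1) ^ (0x82F63B78u & (uint32_t)-(int32_t)(crc & 1u));
            }
        }
        return crc ^ 0xFFFFFFFFu;
    }

    int main(void)
    {
        /* Hypothetical 4 KiB of PDU data, matching the len:0x1000 above. */
        static uint8_t pdu_data[0x1000];
        uint32_t wire_ddgst;

        /* Pretend the sender computed the digest, then a bit flipped in transit. */
        wire_ddgst = crc32c(pdu_data, sizeof(pdu_data));
        pdu_data[42] ^= 0x01;

        if (crc32c(pdu_data, sizeof(pdu_data)) != wire_ddgst) {
            /* This is the condition the log reports as "Data digest error";
             * the host then sees the retryable status SCT 0x0 / SC 0x22,
             * printed as COMMAND TRANSIENT TRANSPORT ERROR (00/22). */
            fprintf(stderr, "Data digest error\n");
            return 1;
        }
        return 0;
    }

The steady roughly-10 ms cadence and uniformly retryable status are consistent with the test deliberately corrupting digests rather than with a flaky link, though the log itself does not say which.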
00:27:05.171 [2024-04-24 21:32:19.935134] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f9b30
00:27:05.172 [2024-04-24 21:32:19.935260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:11433 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:05.172 [2024-04-24 21:32:19.935292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007b p:0 m:0 dnr:0
[... the same three-record sequence (tcp.c:2047 data digest error, nvme_qpair.c WRITE print, COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) continues at the same cadence on qid:1 with cids cycling 1, 121-126 and varying lba, from 21:32:19.944 through 21:32:20.526 ...]
00:27:05.768 [2024-04-24 21:32:20.535730] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f9b30
00:27:05.768 [2024-04-24 21:32:20.535855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:19135 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:05.768 [2024-04-24 21:32:20.535877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:27:05.768 [2024-04-24 21:32:20.545315]
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f9b30 00:27:05.768 [2024-04-24 21:32:20.545438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:23581 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.768 [2024-04-24 21:32:20.545461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:05.768 [2024-04-24 21:32:20.554896] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f9b30 00:27:05.768 [2024-04-24 21:32:20.555020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:23466 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.768 [2024-04-24 21:32:20.555042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:05.768 [2024-04-24 21:32:20.564502] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f9b30 00:27:05.768 [2024-04-24 21:32:20.564629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:13962 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.768 [2024-04-24 21:32:20.564653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:05.768 [2024-04-24 21:32:20.574076] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f9b30 00:27:05.768 [2024-04-24 21:32:20.574200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6566 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.768 [2024-04-24 21:32:20.574222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:05.768 [2024-04-24 21:32:20.583691] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f9b30 00:27:05.768 [2024-04-24 21:32:20.583816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:11025 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.768 [2024-04-24 21:32:20.583840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:05.768 [2024-04-24 21:32:20.593256] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f9b30 00:27:05.768 [2024-04-24 21:32:20.593389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:15106 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.768 [2024-04-24 21:32:20.593411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:05.768 [2024-04-24 21:32:20.602863] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f9b30 00:27:05.768 [2024-04-24 21:32:20.602988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:5031 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.768 [2024-04-24 21:32:20.603011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 
sqhd:007b p:0 m:0 dnr:0 00:27:05.768 [2024-04-24 21:32:20.612476] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f9b30 00:27:05.768 [2024-04-24 21:32:20.612602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:25171 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.768 [2024-04-24 21:32:20.612624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:05.768 [2024-04-24 21:32:20.622077] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f9b30 00:27:05.768 [2024-04-24 21:32:20.622201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:8024 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.768 [2024-04-24 21:32:20.622224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:05.768 [2024-04-24 21:32:20.631697] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f9b30 00:27:05.768 [2024-04-24 21:32:20.631821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:21728 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.768 [2024-04-24 21:32:20.631843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:05.768 [2024-04-24 21:32:20.641343] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f9b30 00:27:05.768 [2024-04-24 21:32:20.641467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14401 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.768 [2024-04-24 21:32:20.641489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:05.768 [2024-04-24 21:32:20.650967] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f9b30 00:27:05.768 [2024-04-24 21:32:20.651092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:22546 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.768 [2024-04-24 21:32:20.651114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:05.768 [2024-04-24 21:32:20.660607] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f9b30 00:27:05.768 [2024-04-24 21:32:20.660733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:1143 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.768 [2024-04-24 21:32:20.660756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:05.768 [2024-04-24 21:32:20.670196] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f9b30 00:27:05.768 [2024-04-24 21:32:20.670327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:17897 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.768 [2024-04-24 21:32:20.670349] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:27:05.768
00:27:05.768                                                Latency(us)
00:27:05.768 Device Information : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:27:05.768 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:27:05.768 nvme0n1             :       2.00   26227.79     102.45       0.00     0.00    4871.80    4052.88   12417.35
00:27:05.768 ===================================================================================================================
00:27:05.768 Total               :   26227.79     102.45       0.00     0.00    4871.80    4052.88   12417.35
00:27:05.768 0
00:27:05.768 21:32:20 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:27:05.768 21:32:20 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:27:05.768 21:32:20 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:27:05.768 | .driver_specific
00:27:05.768 | .nvme_error
00:27:05.768 | .status_code
00:27:05.768 | .command_transient_transport_error'
00:27:05.768 21:32:20 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:27:06.025 21:32:20 -- host/digest.sh@71 -- # (( 206 > 0 ))
00:27:06.025 21:32:20 -- host/digest.sh@73 -- # killprocess 1369791
00:27:06.025 21:32:20 -- common/autotest_common.sh@936 -- # '[' -z 1369791 ']'
00:27:06.025 21:32:20 -- common/autotest_common.sh@940 -- # kill -0 1369791
00:27:06.025 21:32:20 -- common/autotest_common.sh@941 -- # uname
00:27:06.025 21:32:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:27:06.026 21:32:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1369791
00:27:06.026 21:32:20 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:27:06.026 21:32:20 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:27:06.026 21:32:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1369791'
00:27:06.026 killing process with pid 1369791
00:27:06.026 21:32:20 -- common/autotest_common.sh@955 -- # kill 1369791
00:27:06.026 Received shutdown signal, test time was about 2.000000 seconds
00:27:06.026
00:27:06.026                                                Latency(us)
00:27:06.026 Device Information : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:27:06.026 ===================================================================================================================
00:27:06.026 Total               :       0.00       0.00       0.00       0.00       0.00       0.00       0.00
00:27:06.026 21:32:20 -- common/autotest_common.sh@960 -- # wait 1369791
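The trace above is host/digest.sh's pass/fail check for the randwrite run that just finished: it pulls the per-controller NVMe error counters out of bdev_get_iostat over the bperf RPC socket and requires a non-zero COMMAND TRANSIENT TRANSPORT ERROR count (206 here). A minimal standalone sketch of the same check, using the rpc.py path and jq filter exactly as traced (it assumes the controller was attached with --nvme-error-stat enabled, as in this run, so the counters are populated):

#!/usr/bin/env bash
# Sketch of the get_transient_errcount check traced above; paths as in this workspace.
rpc=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py
sock=/var/tmp/bperf.sock

errcount=$("$rpc" -s "$sock" bdev_get_iostat -b nvme0n1 |
           jq -r '.bdevs[0]
                  | .driver_specific
                  | .nvme_error
                  | .status_code
                  | .command_transient_transport_error')

# Injected digest errors should surface as transient transport errors;
# the run above counted 206 of them, so the (( count > 0 )) gate passes.
(( errcount > 0 )) && echo "transient transport errors: $errcount"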
00:27:06.590 21:32:21 -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:27:06.590 21:32:21 -- host/digest.sh@54 -- # local rw bs qd
00:27:06.590 21:32:21 -- host/digest.sh@56 -- # rw=randwrite
00:27:06.590 21:32:21 -- host/digest.sh@56 -- # bs=131072
00:27:06.590 21:32:21 -- host/digest.sh@56 -- # qd=16
00:27:06.590 21:32:21 -- host/digest.sh@58 -- # bperfpid=1370688
00:27:06.590 21:32:21 -- host/digest.sh@60 -- # waitforlisten 1370688 /var/tmp/bperf.sock
00:27:06.590 21:32:21 -- common/autotest_common.sh@817 -- # '[' -z 1370688 ']'
00:27:06.590 21:32:21 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock
00:27:06.590 21:32:21 -- common/autotest_common.sh@822 -- # local max_retries=100
00:27:06.590 21:32:21 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:27:06.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:27:06.590 21:32:21 -- common/autotest_common.sh@826 -- # xtrace_disable
00:27:06.590 21:32:21 -- common/autotest_common.sh@10 -- # set +x
00:27:06.590 21:32:21 -- host/digest.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:27:06.591 [2024-04-24 21:32:21.320362] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization...
00:27:06.591 [2024-04-24 21:32:21.320475] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1370688 ]
00:27:06.591 I/O size of 131072 is greater than zero copy threshold (65536).
00:27:06.591 Zero copy mechanism will not be used.
00:27:06.591 EAL: No free 2048 kB hugepages reported on node 1
00:27:06.591 [2024-04-24 21:32:21.431992] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:06.591 [2024-04-24 21:32:21.520337] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:27:07.159 21:32:22 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:27:07.159 21:32:22 -- common/autotest_common.sh@850 -- # return 0
00:27:07.159 21:32:22 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:27:07.159 21:32:22 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:27:07.417 21:32:22 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:27:07.417 21:32:22 -- common/autotest_common.sh@549 -- # xtrace_disable
00:27:07.417 21:32:22 -- common/autotest_common.sh@10 -- # set +x
00:27:07.417 21:32:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:27:07.417 21:32:22 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:27:07.417 21:32:22 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:27:07.417 nvme0n1
00:27:07.676 21:32:22 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:27:07.676 21:32:22 -- common/autotest_common.sh@549 -- # xtrace_disable
00:27:07.676 21:32:22 -- common/autotest_common.sh@10 -- # set +x
00:27:07.676 21:32:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:27:07.676 21:32:22 -- host/digest.sh@69 -- # bperf_py perform_tests
00:27:07.676 21:32:22 -- host/digest.sh@19 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:27:07.676 I/O size of 131072 is greater than zero copy threshold (65536).
00:27:07.676 Zero copy mechanism will not be used.
00:27:07.676 Running I/O for 2 seconds...
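The block above is the setup for the next case, run_bperf_err randwrite 131072 16: a fresh bdevperf instance is started on /var/tmp/bperf.sock, NVMe error counters are enabled with unlimited bdev retries, the controller is attached with TCP data digest (--ddgst) turned on, the accel crc32c corruption injector is armed, and only then is the workload kicked off via perform_tests. Condensed into a standalone sketch, with every flag and address taken from the trace above; the waitforlisten helper is simplified here to a plain socket poll, and an NVMe-oF target is assumed to be listening on 10.0.0.2:4420 as elsewhere in this log:

#!/usr/bin/env bash
# Condensed from the run_bperf_err trace above (workspace paths as logged).
spdk=/var/jenkins/workspace/dsa-phy-autotest/spdk
sock=/var/tmp/bperf.sock

# randwrite, 128 KiB I/O, queue depth 16, 2 s runtime, core mask 0x2;
# -z makes bdevperf wait for an RPC before starting the workload.
"$spdk"/build/examples/bdevperf -m 2 -r "$sock" -w randwrite -o 131072 -t 2 -q 16 -z &
bperfpid=$!
while [ ! -S "$sock" ]; do sleep 0.1; done   # simplified stand-in for waitforlisten

rpc() { "$spdk"/scripts/rpc.py -s "$sock" "$@"; }
rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
rpc accel_error_inject_error -o crc32c -t disable     # clear any leftover injection
rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# Corrupt crc32c results on an interval of 32 operations (-i 32, per the trace),
# so a fraction of WRITEs carry a bad data digest and fail on the target side.
rpc accel_error_inject_error -o crc32c -t corrupt -i 32

"$spdk"/examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests
kill "$bperfpid"; wait "$bperfpid"

The repeated tcp.c data_crc32_calc_done / COMMAND TRANSIENT TRANSPORT ERROR notices that follow are the expected effect of that injection during the 2-second run.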
00:27:07.676 [2024-04-24 21:32:22.469195] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:07.676 [2024-04-24 21:32:22.469468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.676 [2024-04-24 21:32:22.469508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:07.676 [2024-04-24 21:32:22.475290] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:07.676 [2024-04-24 21:32:22.475541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.676 [2024-04-24 21:32:22.475574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:07.676 [2024-04-24 21:32:22.481526] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:07.676 [2024-04-24 21:32:22.481768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.676 [2024-04-24 21:32:22.481798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:07.676 [2024-04-24 21:32:22.487303] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:07.676 [2024-04-24 21:32:22.487541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.676 [2024-04-24 21:32:22.487569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.676 [2024-04-24 21:32:22.493494] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:07.676 [2024-04-24 21:32:22.493726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.676 [2024-04-24 21:32:22.493753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:07.676 [2024-04-24 21:32:22.498529] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:07.676 [2024-04-24 21:32:22.498765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.676 [2024-04-24 21:32:22.498791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:07.676 [2024-04-24 21:32:22.505326] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:07.676 [2024-04-24 21:32:22.505572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.676 [2024-04-24 21:32:22.505601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:07.676 [2024-04-24 21:32:22.510971] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:07.676 [2024-04-24 21:32:22.511214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.676 [2024-04-24 21:32:22.511241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.676 [2024-04-24 21:32:22.517860] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:07.676 [2024-04-24 21:32:22.518092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.677 [2024-04-24 21:32:22.518118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:07.677 [2024-04-24 21:32:22.525057] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:07.677 [2024-04-24 21:32:22.525297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.677 [2024-04-24 21:32:22.525324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:07.677 [2024-04-24 21:32:22.532506] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:07.677 [2024-04-24 21:32:22.532746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.677 [2024-04-24 21:32:22.532772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:07.677 [2024-04-24 21:32:22.539582] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:07.677 [2024-04-24 21:32:22.539820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.677 [2024-04-24 21:32:22.539844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.677 [2024-04-24 21:32:22.546533] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:07.677 [2024-04-24 21:32:22.546759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.677 [2024-04-24 21:32:22.546785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:07.677 [2024-04-24 21:32:22.552591] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:07.677 [2024-04-24 21:32:22.552822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.677 [2024-04-24 
21:32:22.552846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:07.677 [2024-04-24 21:32:22.558474] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:07.677 [2024-04-24 21:32:22.558700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.677 [2024-04-24 21:32:22.558725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:07.677 [2024-04-24 21:32:22.563762] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:07.677 [2024-04-24 21:32:22.563983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.677 [2024-04-24 21:32:22.564009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.677 [2024-04-24 21:32:22.568656] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:07.677 [2024-04-24 21:32:22.568880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.677 [2024-04-24 21:32:22.568906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:07.677 [2024-04-24 21:32:22.573607] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:07.677 [2024-04-24 21:32:22.573831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.677 [2024-04-24 21:32:22.573856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:07.677 [2024-04-24 21:32:22.580364] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:07.677 [2024-04-24 21:32:22.580587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.677 [2024-04-24 21:32:22.580610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:07.677 [2024-04-24 21:32:22.585261] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:07.677 [2024-04-24 21:32:22.585498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.677 [2024-04-24 21:32:22.585522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.677 [2024-04-24 21:32:22.590065] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:07.677 [2024-04-24 21:32:22.590303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.677 [2024-04-24 21:32:22.590330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:07.677 [2024-04-24 21:32:22.595162] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:07.677 [2024-04-24 21:32:22.595392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.677 [2024-04-24 21:32:22.595417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:07.677 [2024-04-24 21:32:22.601937] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:07.677 [2024-04-24 21:32:22.602171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.677 [2024-04-24 21:32:22.602196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:07.677 [2024-04-24 21:32:22.606885] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:07.677 [2024-04-24 21:32:22.607121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.677 [2024-04-24 21:32:22.607144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.677 [2024-04-24 21:32:22.611676] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:07.677 [2024-04-24 21:32:22.611893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.677 [2024-04-24 21:32:22.611917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:07.677 [2024-04-24 21:32:22.616237] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:07.677 [2024-04-24 21:32:22.616468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.677 [2024-04-24 21:32:22.616494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:07.677 [2024-04-24 21:32:22.621436] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:07.677 [2024-04-24 21:32:22.621658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.677 [2024-04-24 21:32:22.621682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:07.677 [2024-04-24 21:32:22.628000] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:07.677 [2024-04-24 21:32:22.628219] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.677 [2024-04-24 21:32:22.628244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.677 [2024-04-24 21:32:22.635442] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:07.677 [2024-04-24 21:32:22.635674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.677 [2024-04-24 21:32:22.635700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:07.937 [2024-04-24 21:32:22.641003] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:07.937 [2024-04-24 21:32:22.641232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.937 [2024-04-24 21:32:22.641262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:07.937 [2024-04-24 21:32:22.645518] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:07.938 [2024-04-24 21:32:22.645748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.938 [2024-04-24 21:32:22.645773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:07.938 [2024-04-24 21:32:22.650318] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:07.938 [2024-04-24 21:32:22.650535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.938 [2024-04-24 21:32:22.650561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.938 [2024-04-24 21:32:22.654824] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:07.938 [2024-04-24 21:32:22.654940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.938 [2024-04-24 21:32:22.654964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:07.938 [2024-04-24 21:32:22.659854] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:07.938 [2024-04-24 21:32:22.660078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.938 [2024-04-24 21:32:22.660105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:07.938 [2024-04-24 21:32:22.666101] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:07.938 [2024-04-24 21:32:22.666334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.938 [2024-04-24 21:32:22.666358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:07.938 [2024-04-24 21:32:22.672150] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:07.938 [2024-04-24 21:32:22.672386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.938 [2024-04-24 21:32:22.672409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.938 [2024-04-24 21:32:22.677708] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:07.938 [2024-04-24 21:32:22.677930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.938 [2024-04-24 21:32:22.677957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:07.938 [2024-04-24 21:32:22.683329] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:07.938 [2024-04-24 21:32:22.683563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.938 [2024-04-24 21:32:22.683589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:07.938 [2024-04-24 21:32:22.690261] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:07.938 [2024-04-24 21:32:22.690494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.938 [2024-04-24 21:32:22.690519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:07.938 [2024-04-24 21:32:22.696016] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:07.938 [2024-04-24 21:32:22.696238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.938 [2024-04-24 21:32:22.696262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.938 [2024-04-24 21:32:22.701545] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:07.938 [2024-04-24 21:32:22.701778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.938 [2024-04-24 21:32:22.701801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:07.938 [2024-04-24 
21:32:22.707211] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:07.938 [2024-04-24 21:32:22.707444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.938 [2024-04-24 21:32:22.707469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:07.938 [2024-04-24 21:32:22.712714] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:07.938 [2024-04-24 21:32:22.712944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.938 [2024-04-24 21:32:22.712969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:07.938 [2024-04-24 21:32:22.718155] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:07.938 [2024-04-24 21:32:22.718386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.938 [2024-04-24 21:32:22.718411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.938 [2024-04-24 21:32:22.723240] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:07.938 [2024-04-24 21:32:22.723475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.938 [2024-04-24 21:32:22.723499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:07.938 [2024-04-24 21:32:22.728500] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:07.938 [2024-04-24 21:32:22.728716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.938 [2024-04-24 21:32:22.728739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:07.938 [2024-04-24 21:32:22.733361] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:07.938 [2024-04-24 21:32:22.733580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.938 [2024-04-24 21:32:22.733605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:07.938 [2024-04-24 21:32:22.738924] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:07.938 [2024-04-24 21:32:22.739142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.938 [2024-04-24 21:32:22.739166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.938 [2024-04-24 21:32:22.744644] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:07.938 [2024-04-24 21:32:22.744864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.938 [2024-04-24 21:32:22.744888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:07.938 [2024-04-24 21:32:22.749986] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:07.938 [2024-04-24 21:32:22.750208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.938 [2024-04-24 21:32:22.750233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:07.938 [2024-04-24 21:32:22.754703] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:07.938 [2024-04-24 21:32:22.754920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.938 [2024-04-24 21:32:22.754943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:07.938 [2024-04-24 21:32:22.759459] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:07.938 [2024-04-24 21:32:22.759688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.938 [2024-04-24 21:32:22.759714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.938 [2024-04-24 21:32:22.764542] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:07.938 [2024-04-24 21:32:22.764769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.938 [2024-04-24 21:32:22.764795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:07.938 [2024-04-24 21:32:22.769532] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:07.938 [2024-04-24 21:32:22.769761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.938 [2024-04-24 21:32:22.769785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:07.938 [2024-04-24 21:32:22.774422] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:07.939 [2024-04-24 21:32:22.774648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.939 [2024-04-24 21:32:22.774677] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:07.939 [2024-04-24 21:32:22.780978] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:07.939 [2024-04-24 21:32:22.781202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.939 [2024-04-24 21:32:22.781228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.939 [2024-04-24 21:32:22.785612] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:07.939 [2024-04-24 21:32:22.785835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.939 [2024-04-24 21:32:22.785860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:07.939 [2024-04-24 21:32:22.790368] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:07.939 [2024-04-24 21:32:22.790594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.939 [2024-04-24 21:32:22.790624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:07.939 [2024-04-24 21:32:22.794933] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:07.939 [2024-04-24 21:32:22.795156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.939 [2024-04-24 21:32:22.795181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:07.939 [2024-04-24 21:32:22.799752] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:07.939 [2024-04-24 21:32:22.799978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.939 [2024-04-24 21:32:22.800003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.939 [2024-04-24 21:32:22.804425] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:07.939 [2024-04-24 21:32:22.804652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.939 [2024-04-24 21:32:22.804677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:07.939 [2024-04-24 21:32:22.809860] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:07.939 [2024-04-24 21:32:22.810085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:27:07.939 [2024-04-24 21:32:22.810111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:07.939 [2024-04-24 21:32:22.815132] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:07.939 [2024-04-24 21:32:22.815363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.939 [2024-04-24 21:32:22.815387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:07.939 [2024-04-24 21:32:22.819849] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:07.939 [2024-04-24 21:32:22.820068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.939 [2024-04-24 21:32:22.820094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.939 [2024-04-24 21:32:22.824475] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:07.939 [2024-04-24 21:32:22.824694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.939 [2024-04-24 21:32:22.824717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:07.939 [2024-04-24 21:32:22.830662] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:07.939 [2024-04-24 21:32:22.830881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.939 [2024-04-24 21:32:22.830904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:07.939 [2024-04-24 21:32:22.836979] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:07.939 [2024-04-24 21:32:22.837207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.939 [2024-04-24 21:32:22.837231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:07.939 [2024-04-24 21:32:22.843933] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:07.939 [2024-04-24 21:32:22.844153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.939 [2024-04-24 21:32:22.844179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.939 [2024-04-24 21:32:22.851213] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:07.939 [2024-04-24 21:32:22.851449] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.939 [2024-04-24 21:32:22.851476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:07.939 [2024-04-24 21:32:22.858629] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:07.939 [2024-04-24 21:32:22.858846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.939 [2024-04-24 21:32:22.858871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:07.939 [2024-04-24 21:32:22.863694] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:07.939 [2024-04-24 21:32:22.863920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.939 [2024-04-24 21:32:22.863943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:07.939 [2024-04-24 21:32:22.868247] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:07.939 [2024-04-24 21:32:22.868472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.939 [2024-04-24 21:32:22.868498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.939 [2024-04-24 21:32:22.872728] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:07.939 [2024-04-24 21:32:22.872953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.939 [2024-04-24 21:32:22.872978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:07.939 [2024-04-24 21:32:22.879450] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:07.939 [2024-04-24 21:32:22.879678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.939 [2024-04-24 21:32:22.879704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:07.939 [2024-04-24 21:32:22.884094] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:07.939 [2024-04-24 21:32:22.884321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.939 [2024-04-24 21:32:22.884351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:07.939 [2024-04-24 21:32:22.888785] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) 
[... further identical data digest error / WRITE / COMMAND TRANSIENT TRANSPORT ERROR (00/22) log triples elided; only the timestamps, LBAs, and sqhd values vary ...]
00:27:08.721 [2024-04-24 21:32:23.554185]
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:08.721 [2024-04-24 21:32:23.554419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.721 [2024-04-24 21:32:23.554444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:08.721 [2024-04-24 21:32:23.558733] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:08.721 [2024-04-24 21:32:23.558952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.721 [2024-04-24 21:32:23.558975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:08.721 [2024-04-24 21:32:23.563219] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:08.721 [2024-04-24 21:32:23.563449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.721 [2024-04-24 21:32:23.563472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.721 [2024-04-24 21:32:23.567791] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:08.721 [2024-04-24 21:32:23.568017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.721 [2024-04-24 21:32:23.568041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:08.721 [2024-04-24 21:32:23.572291] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:08.721 [2024-04-24 21:32:23.572518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.721 [2024-04-24 21:32:23.572546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:08.721 [2024-04-24 21:32:23.576958] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:08.721 [2024-04-24 21:32:23.577181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.721 [2024-04-24 21:32:23.577206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:08.721 [2024-04-24 21:32:23.581588] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:08.721 [2024-04-24 21:32:23.581822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.721 [2024-04-24 21:32:23.581846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.721 [2024-04-24 21:32:23.586387] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:08.721 [2024-04-24 21:32:23.586615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.721 [2024-04-24 21:32:23.586640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:08.722 [2024-04-24 21:32:23.591005] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:08.722 [2024-04-24 21:32:23.591233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.722 [2024-04-24 21:32:23.591259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:08.722 [2024-04-24 21:32:23.595691] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:08.722 [2024-04-24 21:32:23.595921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.722 [2024-04-24 21:32:23.595946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:08.722 [2024-04-24 21:32:23.600378] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:08.722 [2024-04-24 21:32:23.600600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.722 [2024-04-24 21:32:23.600624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.722 [2024-04-24 21:32:23.605014] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:08.722 [2024-04-24 21:32:23.605243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.722 [2024-04-24 21:32:23.605271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:08.722 [2024-04-24 21:32:23.609502] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:08.722 [2024-04-24 21:32:23.609723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.722 [2024-04-24 21:32:23.609745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:08.722 [2024-04-24 21:32:23.613909] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:08.722 [2024-04-24 21:32:23.614139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.722 [2024-04-24 21:32:23.614165] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:08.722 [2024-04-24 21:32:23.618534] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:08.722 [2024-04-24 21:32:23.618760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.722 [2024-04-24 21:32:23.618788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.722 [2024-04-24 21:32:23.623439] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:08.722 [2024-04-24 21:32:23.623668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.722 [2024-04-24 21:32:23.623693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:08.722 [2024-04-24 21:32:23.628496] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:08.722 [2024-04-24 21:32:23.628727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.722 [2024-04-24 21:32:23.628750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:08.722 [2024-04-24 21:32:23.633043] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:08.722 [2024-04-24 21:32:23.633261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.722 [2024-04-24 21:32:23.633288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:08.722 [2024-04-24 21:32:23.637663] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:08.722 [2024-04-24 21:32:23.637882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.722 [2024-04-24 21:32:23.637908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.722 [2024-04-24 21:32:23.642258] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:08.722 [2024-04-24 21:32:23.642479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.722 [2024-04-24 21:32:23.642501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:08.722 [2024-04-24 21:32:23.646909] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:08.722 [2024-04-24 21:32:23.647127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:27:08.722 [2024-04-24 21:32:23.647150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:08.722 [2024-04-24 21:32:23.651401] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:08.722 [2024-04-24 21:32:23.651628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.722 [2024-04-24 21:32:23.651650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:08.722 [2024-04-24 21:32:23.655700] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:08.722 [2024-04-24 21:32:23.655839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.722 [2024-04-24 21:32:23.655861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.722 [2024-04-24 21:32:23.660063] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:08.722 [2024-04-24 21:32:23.660288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.722 [2024-04-24 21:32:23.660309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:08.722 [2024-04-24 21:32:23.664556] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:08.722 [2024-04-24 21:32:23.664784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.722 [2024-04-24 21:32:23.664807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:08.722 [2024-04-24 21:32:23.669061] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:08.722 [2024-04-24 21:32:23.669282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.722 [2024-04-24 21:32:23.669304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:08.722 [2024-04-24 21:32:23.673447] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:08.722 [2024-04-24 21:32:23.673679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.722 [2024-04-24 21:32:23.673703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.722 [2024-04-24 21:32:23.677857] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:08.722 [2024-04-24 21:32:23.678074] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.722 [2024-04-24 21:32:23.678097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:08.982 [2024-04-24 21:32:23.682775] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:08.982 [2024-04-24 21:32:23.682992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.982 [2024-04-24 21:32:23.683017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:08.982 [2024-04-24 21:32:23.687831] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:08.982 [2024-04-24 21:32:23.688060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.982 [2024-04-24 21:32:23.688083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:08.982 [2024-04-24 21:32:23.693057] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:08.982 [2024-04-24 21:32:23.693278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.982 [2024-04-24 21:32:23.693302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.982 [2024-04-24 21:32:23.699530] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:08.982 [2024-04-24 21:32:23.699759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.982 [2024-04-24 21:32:23.699782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:08.982 [2024-04-24 21:32:23.706324] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:08.982 [2024-04-24 21:32:23.706554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.982 [2024-04-24 21:32:23.706579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:08.982 [2024-04-24 21:32:23.713142] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:08.982 [2024-04-24 21:32:23.713387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.982 [2024-04-24 21:32:23.713414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:08.982 [2024-04-24 21:32:23.718586] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) 
with pdu=0x2000195fef90 00:27:08.982 [2024-04-24 21:32:23.718809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.982 [2024-04-24 21:32:23.718832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.982 [2024-04-24 21:32:23.723535] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:08.982 [2024-04-24 21:32:23.723762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.982 [2024-04-24 21:32:23.723786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:08.982 [2024-04-24 21:32:23.728275] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:08.982 [2024-04-24 21:32:23.728507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.982 [2024-04-24 21:32:23.728536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:08.982 [2024-04-24 21:32:23.733119] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:08.982 [2024-04-24 21:32:23.733358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.982 [2024-04-24 21:32:23.733383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:08.982 [2024-04-24 21:32:23.738780] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:08.982 [2024-04-24 21:32:23.739010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.982 [2024-04-24 21:32:23.739034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.982 [2024-04-24 21:32:23.744445] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:08.982 [2024-04-24 21:32:23.744665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.982 [2024-04-24 21:32:23.744690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:08.982 [2024-04-24 21:32:23.749781] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:08.982 [2024-04-24 21:32:23.750012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.982 [2024-04-24 21:32:23.750039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:08.982 [2024-04-24 21:32:23.755190] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:08.982 [2024-04-24 21:32:23.755431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.982 [2024-04-24 21:32:23.755456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:08.982 [2024-04-24 21:32:23.761512] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:08.982 [2024-04-24 21:32:23.761739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.982 [2024-04-24 21:32:23.761763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.982 [2024-04-24 21:32:23.767718] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:08.982 [2024-04-24 21:32:23.767946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.983 [2024-04-24 21:32:23.767971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:08.983 [2024-04-24 21:32:23.773153] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:08.983 [2024-04-24 21:32:23.773383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.983 [2024-04-24 21:32:23.773408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:08.983 [2024-04-24 21:32:23.779175] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:08.983 [2024-04-24 21:32:23.779461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.983 [2024-04-24 21:32:23.779490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:08.983 [2024-04-24 21:32:23.785353] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:08.983 [2024-04-24 21:32:23.785516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.983 [2024-04-24 21:32:23.785542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.983 [2024-04-24 21:32:23.792105] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:08.983 [2024-04-24 21:32:23.792343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.983 [2024-04-24 21:32:23.792367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:08.983 [2024-04-24 21:32:23.797660] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:08.983 [2024-04-24 21:32:23.797887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.983 [2024-04-24 21:32:23.797911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:08.983 [2024-04-24 21:32:23.802587] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:08.983 [2024-04-24 21:32:23.802819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.983 [2024-04-24 21:32:23.802843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:08.983 [2024-04-24 21:32:23.807664] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:08.983 [2024-04-24 21:32:23.807886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.983 [2024-04-24 21:32:23.807910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.983 [2024-04-24 21:32:23.813628] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:08.983 [2024-04-24 21:32:23.813847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.983 [2024-04-24 21:32:23.813871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:08.983 [2024-04-24 21:32:23.818866] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:08.983 [2024-04-24 21:32:23.818935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.983 [2024-04-24 21:32:23.818960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:08.983 [2024-04-24 21:32:23.826032] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:08.983 [2024-04-24 21:32:23.826259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.983 [2024-04-24 21:32:23.826296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:08.983 [2024-04-24 21:32:23.832674] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:08.983 [2024-04-24 21:32:23.832902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.983 [2024-04-24 21:32:23.832927] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.983 [2024-04-24 21:32:23.840914] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:08.983 [2024-04-24 21:32:23.841141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.983 [2024-04-24 21:32:23.841165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:08.983 [2024-04-24 21:32:23.846829] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:08.983 [2024-04-24 21:32:23.847051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.983 [2024-04-24 21:32:23.847075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:08.983 [2024-04-24 21:32:23.852546] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:08.983 [2024-04-24 21:32:23.852773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.983 [2024-04-24 21:32:23.852801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:08.983 [2024-04-24 21:32:23.858724] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:08.983 [2024-04-24 21:32:23.858951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.983 [2024-04-24 21:32:23.858975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.983 [2024-04-24 21:32:23.865080] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:08.983 [2024-04-24 21:32:23.865305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.983 [2024-04-24 21:32:23.865330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:08.983 [2024-04-24 21:32:23.871598] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:08.983 [2024-04-24 21:32:23.871830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.983 [2024-04-24 21:32:23.871856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:08.983 [2024-04-24 21:32:23.877497] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:08.983 [2024-04-24 21:32:23.877717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:27:08.983 [2024-04-24 21:32:23.877741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:08.983 [2024-04-24 21:32:23.883592] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:08.983 [2024-04-24 21:32:23.883812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.983 [2024-04-24 21:32:23.883836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.983 [2024-04-24 21:32:23.889558] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:08.983 [2024-04-24 21:32:23.889791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.983 [2024-04-24 21:32:23.889819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:08.983 [2024-04-24 21:32:23.895043] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:08.983 [2024-04-24 21:32:23.895265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.983 [2024-04-24 21:32:23.895296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:08.983 [2024-04-24 21:32:23.900446] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:08.983 [2024-04-24 21:32:23.900666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.983 [2024-04-24 21:32:23.900689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:08.983 [2024-04-24 21:32:23.905394] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:08.983 [2024-04-24 21:32:23.905629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.983 [2024-04-24 21:32:23.905654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.983 [2024-04-24 21:32:23.911869] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:08.983 [2024-04-24 21:32:23.912102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.983 [2024-04-24 21:32:23.912128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:08.983 [2024-04-24 21:32:23.917630] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:08.983 [2024-04-24 21:32:23.917858] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.983 [2024-04-24 21:32:23.917883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:08.983 [2024-04-24 21:32:23.924524] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:08.983 [2024-04-24 21:32:23.924753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.984 [2024-04-24 21:32:23.924778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:08.984 [2024-04-24 21:32:23.931010] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:08.984 [2024-04-24 21:32:23.931228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.984 [2024-04-24 21:32:23.931253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.984 [2024-04-24 21:32:23.938720] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:08.984 [2024-04-24 21:32:23.938945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.984 [2024-04-24 21:32:23.938970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.244 [2024-04-24 21:32:23.944871] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:09.244 [2024-04-24 21:32:23.945105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.244 [2024-04-24 21:32:23.945130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.244 [2024-04-24 21:32:23.949701] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:09.244 [2024-04-24 21:32:23.949919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.244 [2024-04-24 21:32:23.949944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.244 [2024-04-24 21:32:23.954777] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:09.244 [2024-04-24 21:32:23.955030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.244 [2024-04-24 21:32:23.955060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.244 [2024-04-24 21:32:23.959938] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x2000195fef90 00:27:09.244 [2024-04-24 21:32:23.960161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.244 [2024-04-24 21:32:23.960187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.244 [2024-04-24 21:32:23.965024] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:09.244 [2024-04-24 21:32:23.965259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.244 [2024-04-24 21:32:23.965290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.244 [2024-04-24 21:32:23.971038] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:09.244 [2024-04-24 21:32:23.971276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.244 [2024-04-24 21:32:23.971300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.244 [2024-04-24 21:32:23.975590] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:09.244 [2024-04-24 21:32:23.975809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.244 [2024-04-24 21:32:23.975833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.244 [2024-04-24 21:32:23.980277] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:09.244 [2024-04-24 21:32:23.980498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.244 [2024-04-24 21:32:23.980521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.244 [2024-04-24 21:32:23.985023] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:09.244 [2024-04-24 21:32:23.985248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.244 [2024-04-24 21:32:23.985285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.244 [2024-04-24 21:32:23.989701] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:09.244 [2024-04-24 21:32:23.989916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.244 [2024-04-24 21:32:23.989940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.244 [2024-04-24 21:32:23.994198] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:09.244 [2024-04-24 21:32:23.994428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.244 [2024-04-24 21:32:23.994453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.244 [2024-04-24 21:32:23.998682] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:09.244 [2024-04-24 21:32:23.998900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.244 [2024-04-24 21:32:23.998923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.244 [2024-04-24 21:32:24.003491] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:09.244 [2024-04-24 21:32:24.003716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.244 [2024-04-24 21:32:24.003739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.244 [2024-04-24 21:32:24.007996] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:09.244 [2024-04-24 21:32:24.008212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.244 [2024-04-24 21:32:24.008238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.244 [2024-04-24 21:32:24.012603] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:09.244 [2024-04-24 21:32:24.012821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.244 [2024-04-24 21:32:24.012846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.244 [2024-04-24 21:32:24.017175] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:09.244 [2024-04-24 21:32:24.017417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.244 [2024-04-24 21:32:24.017442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.244 [2024-04-24 21:32:24.021816] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:09.244 [2024-04-24 21:32:24.022045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.244 [2024-04-24 21:32:24.022067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.244 [2024-04-24 21:32:24.026483] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:09.244 [2024-04-24 21:32:24.026713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.244 [2024-04-24 21:32:24.026735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
[... the same data_crc32_calc_done *ERROR* / WRITE *NOTICE* / COMMAND TRANSIENT TRANSPORT ERROR (00/22) triple repeats for dozens of further writes on qid:1 cid:15 (timestamps 21:32:24.031 through 21:32:24.432, varying lba, len:32), collapsed here ...]
00:27:09.510 [2024-04-24 21:32:24.438376] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:09.510 [2024-04-24 21:32:24.438597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.510 [2024-04-24 21:32:24.438621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1
cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.510 [2024-04-24 21:32:24.446407] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:09.510 [2024-04-24 21:32:24.446627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.510 [2024-04-24 21:32:24.446651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.510 [2024-04-24 21:32:24.452170] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:09.510 [2024-04-24 21:32:24.452408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.510 [2024-04-24 21:32:24.452436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.510 [2024-04-24 21:32:24.457128] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:09.510 [2024-04-24 21:32:24.457235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.510 [2024-04-24 21:32:24.457261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.510 00:27:09.510 Latency(us) 00:27:09.510 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:09.510 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:27:09.510 nvme0n1 : 2.00 5870.63 733.83 0.00 0.00 2721.16 2009.20 10347.79 00:27:09.510 =================================================================================================================== 00:27:09.510 Total : 5870.63 733.83 0.00 0.00 2721.16 2009.20 10347.79 00:27:09.510 0 00:27:09.771 21:32:24 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:09.771 21:32:24 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:09.771 21:32:24 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:09.771 | .driver_specific 00:27:09.771 | .nvme_error 00:27:09.771 | .status_code 00:27:09.771 | .command_transient_transport_error' 00:27:09.771 21:32:24 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:09.771 21:32:24 -- host/digest.sh@71 -- # (( 379 > 0 )) 00:27:09.771 21:32:24 -- host/digest.sh@73 -- # killprocess 1370688 00:27:09.771 21:32:24 -- common/autotest_common.sh@936 -- # '[' -z 1370688 ']' 00:27:09.771 21:32:24 -- common/autotest_common.sh@940 -- # kill -0 1370688 00:27:09.771 21:32:24 -- common/autotest_common.sh@941 -- # uname 00:27:09.771 21:32:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:09.771 21:32:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1370688 00:27:09.771 21:32:24 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:27:09.771 21:32:24 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:27:09.771 21:32:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1370688' 00:27:09.771 killing process with pid 1370688 00:27:09.771 21:32:24 -- common/autotest_common.sh@955 -- # kill 1370688 00:27:09.771 
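[Note: the (( 379 > 0 )) check above is the point of the run: every injected data digest (CRC32C) corruption must surface as a COMMAND TRANSIENT TRANSPORT ERROR, and the harness counts them over the bperf RPC socket. A minimal sketch of that counting step, assuming the same rpc.py path and /var/tmp/bperf.sock socket this run uses; the jq path mirrors the filter in the trace above:]

#!/usr/bin/env bash
# Hedged sketch of get_transient_errcount as exercised by host/digest.sh above.
rpc_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py
bperf_sock=/var/tmp/bperf.sock

get_transient_errcount() {
	local bdev=$1
	# bdev_get_iostat reports NVMe error counters under driver_specific.
	"$rpc_py" -s "$bperf_sock" bdev_get_iostat -b "$bdev" \
		| jq -r '.bdevs[0]
			| .driver_specific
			| .nvme_error
			| .status_code
			| .command_transient_transport_error'
}

errcount=$(get_transient_errcount nvme0n1)
# The digest-error test only passes if corruption was actually detected:
(( errcount > 0 ))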
Received shutdown signal, test time was about 2.000000 seconds 00:27:09.771 00:27:09.771 Latency(us) 00:27:09.771 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:09.771 =================================================================================================================== 00:27:09.771 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:09.771 21:32:24 -- common/autotest_common.sh@960 -- # wait 1370688 00:27:10.337 21:32:25 -- host/digest.sh@116 -- # killprocess 1368238 00:27:10.337 21:32:25 -- common/autotest_common.sh@936 -- # '[' -z 1368238 ']' 00:27:10.337 21:32:25 -- common/autotest_common.sh@940 -- # kill -0 1368238 00:27:10.337 21:32:25 -- common/autotest_common.sh@941 -- # uname 00:27:10.337 21:32:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:10.337 21:32:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1368238 00:27:10.337 21:32:25 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:27:10.337 21:32:25 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:10.337 21:32:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1368238' 00:27:10.337 killing process with pid 1368238 00:27:10.337 21:32:25 -- common/autotest_common.sh@955 -- # kill 1368238 00:27:10.337 21:32:25 -- common/autotest_common.sh@960 -- # wait 1368238 00:27:10.595 00:27:10.595 real 0m16.814s 00:27:10.595 user 0m32.398s 00:27:10.595 sys 0m3.237s 00:27:10.595 21:32:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:27:10.595 21:32:25 -- common/autotest_common.sh@10 -- # set +x 00:27:10.595 ************************************ 00:27:10.595 END TEST nvmf_digest_error 00:27:10.595 ************************************ 00:27:10.595 21:32:25 -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:27:10.595 21:32:25 -- host/digest.sh@150 -- # nvmftestfini 00:27:10.595 21:32:25 -- nvmf/common.sh@477 -- # nvmfcleanup 00:27:10.595 21:32:25 -- nvmf/common.sh@117 -- # sync 00:27:10.595 21:32:25 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:10.595 21:32:25 -- nvmf/common.sh@120 -- # set +e 00:27:10.595 21:32:25 -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:10.595 21:32:25 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:10.595 rmmod nvme_tcp 00:27:10.853 rmmod nvme_fabrics 00:27:10.853 rmmod nvme_keyring 00:27:10.853 21:32:25 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:10.853 21:32:25 -- nvmf/common.sh@124 -- # set -e 00:27:10.853 21:32:25 -- nvmf/common.sh@125 -- # return 0 00:27:10.853 21:32:25 -- nvmf/common.sh@478 -- # '[' -n 1368238 ']' 00:27:10.853 21:32:25 -- nvmf/common.sh@479 -- # killprocess 1368238 00:27:10.853 21:32:25 -- common/autotest_common.sh@936 -- # '[' -z 1368238 ']' 00:27:10.853 21:32:25 -- common/autotest_common.sh@940 -- # kill -0 1368238 00:27:10.853 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (1368238) - No such process 00:27:10.853 21:32:25 -- common/autotest_common.sh@963 -- # echo 'Process with pid 1368238 is not found' 00:27:10.853 Process with pid 1368238 is not found 00:27:10.853 21:32:25 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:27:10.853 21:32:25 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:27:10.853 21:32:25 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:27:10.853 21:32:25 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:10.853 21:32:25 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:10.853 21:32:25 -- nvmf/common.sh@617 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:27:10.853 21:32:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:10.853 21:32:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:12.760 21:32:27 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:12.760 00:27:12.760 real 1m43.693s 00:27:12.760 user 2m21.836s 00:27:12.760 sys 0m14.511s 00:27:12.760 21:32:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:27:12.760 21:32:27 -- common/autotest_common.sh@10 -- # set +x 00:27:12.760 ************************************ 00:27:12.760 END TEST nvmf_digest 00:27:12.760 ************************************ 00:27:12.760 21:32:27 -- nvmf/nvmf.sh@108 -- # [[ 0 -eq 1 ]] 00:27:12.760 21:32:27 -- nvmf/nvmf.sh@113 -- # [[ 0 -eq 1 ]] 00:27:12.760 21:32:27 -- nvmf/nvmf.sh@118 -- # [[ phy-fallback == phy ]] 00:27:12.760 21:32:27 -- nvmf/nvmf.sh@123 -- # timing_exit host 00:27:12.760 21:32:27 -- common/autotest_common.sh@716 -- # xtrace_disable 00:27:12.760 21:32:27 -- common/autotest_common.sh@10 -- # set +x 00:27:12.760 21:32:27 -- nvmf/nvmf.sh@125 -- # trap - SIGINT SIGTERM EXIT 00:27:12.760 00:27:12.760 real 15m53.485s 00:27:12.760 user 31m59.546s 00:27:12.760 sys 4m26.282s 00:27:12.760 21:32:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:27:12.760 21:32:27 -- common/autotest_common.sh@10 -- # set +x 00:27:12.760 ************************************ 00:27:12.760 END TEST nvmf_tcp 00:27:12.760 ************************************ 00:27:13.020 21:32:27 -- spdk/autotest.sh@286 -- # [[ 0 -eq 0 ]] 00:27:13.020 21:32:27 -- spdk/autotest.sh@287 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:27:13.020 21:32:27 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:27:13.020 21:32:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:13.020 21:32:27 -- common/autotest_common.sh@10 -- # set +x 00:27:13.020 ************************************ 00:27:13.020 START TEST spdkcli_nvmf_tcp 00:27:13.020 ************************************ 00:27:13.020 21:32:27 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:27:13.020 * Looking for test storage... 
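[Note: each suite in this log is wrapped by a run_test helper that prints the START TEST / END TEST banners and the real/user/sys timing shown above. A hedged sketch of that wrapper pattern; the real helper lives in test/common/autotest_common.sh and differs in detail:]

# Sketch only: banner-and-time wrapper in the style seen throughout this log.
run_test() {
	local test_name=$1; shift
	echo "************************************"
	echo "START TEST $test_name"
	echo "************************************"
	time "$@"              # bash keyword `time`; $? is the command's status
	local rc=$?
	echo "************************************"
	echo "END TEST $test_name"
	echo "************************************"
	return $rc
}

run_test "spdkcli_nvmf_tcp" ./test/spdkcli/nvmf.sh --transport=tcp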
00:27:13.020 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli 00:27:13.020 21:32:27 -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/common.sh 00:27:13.020 21:32:27 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:27:13.020 21:32:27 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/clear_config.py 00:27:13.020 21:32:27 -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:27:13.020 21:32:27 -- nvmf/common.sh@7 -- # uname -s 00:27:13.020 21:32:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:13.020 21:32:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:13.020 21:32:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:13.020 21:32:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:13.020 21:32:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:13.020 21:32:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:13.020 21:32:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:13.020 21:32:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:13.020 21:32:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:13.020 21:32:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:13.020 21:32:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b7babf-2e5c-ee11-906e-a4bf01970bf2 00:27:13.020 21:32:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=80b7babf-2e5c-ee11-906e-a4bf01970bf2 00:27:13.020 21:32:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:13.020 21:32:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:13.020 21:32:27 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:27:13.020 21:32:27 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:13.020 21:32:27 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:27:13.020 21:32:27 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:13.020 21:32:27 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:13.020 21:32:27 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:13.020 21:32:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:13.020 21:32:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:13.020 21:32:27 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:13.020 21:32:27 -- paths/export.sh@5 -- # export PATH 00:27:13.020 21:32:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:13.020 21:32:27 -- nvmf/common.sh@47 -- # : 0 00:27:13.020 21:32:27 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:13.020 21:32:27 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:13.020 21:32:27 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:13.020 21:32:27 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:13.020 21:32:27 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:13.020 21:32:27 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:13.020 21:32:27 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:13.020 21:32:27 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:13.020 21:32:27 -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:27:13.020 21:32:27 -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:27:13.020 21:32:27 -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:27:13.020 21:32:27 -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:27:13.020 21:32:27 -- common/autotest_common.sh@710 -- # xtrace_disable 00:27:13.020 21:32:27 -- common/autotest_common.sh@10 -- # set +x 00:27:13.020 21:32:27 -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:27:13.020 21:32:27 -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1372040 00:27:13.020 21:32:27 -- spdkcli/common.sh@34 -- # waitforlisten 1372040 00:27:13.020 21:32:27 -- common/autotest_common.sh@817 -- # '[' -z 1372040 ']' 00:27:13.020 21:32:27 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:13.020 21:32:27 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:13.020 21:32:27 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:13.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:13.020 21:32:27 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:13.020 21:32:27 -- common/autotest_common.sh@10 -- # set +x 00:27:13.020 21:32:27 -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:27:13.020 [2024-04-24 21:32:27.977864] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization... 
00:27:13.020 [2024-04-24 21:32:27.977933] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1372040 ] 00:27:13.281 EAL: No free 2048 kB hugepages reported on node 1 00:27:13.281 [2024-04-24 21:32:28.064278] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:13.281 [2024-04-24 21:32:28.159947] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:13.281 [2024-04-24 21:32:28.159954] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:13.850 21:32:28 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:13.850 21:32:28 -- common/autotest_common.sh@850 -- # return 0 00:27:13.850 21:32:28 -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:27:13.850 21:32:28 -- common/autotest_common.sh@716 -- # xtrace_disable 00:27:13.850 21:32:28 -- common/autotest_common.sh@10 -- # set +x 00:27:13.850 21:32:28 -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:27:13.850 21:32:28 -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:27:13.850 21:32:28 -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:27:13.850 21:32:28 -- common/autotest_common.sh@710 -- # xtrace_disable 00:27:13.850 21:32:28 -- common/autotest_common.sh@10 -- # set +x 00:27:13.850 21:32:28 -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:27:13.850 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:27:13.850 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:27:13.850 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:27:13.850 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:27:13.850 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:27:13.850 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:27:13.850 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:27:13.850 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:27:13.850 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:27:13.850 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:27:13.850 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:27:13.850 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:27:13.850 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:27:13.850 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:27:13.850 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:27:13.850 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:27:13.850 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' 
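[Note: at this point nvmf_tgt is up with reactors on cores 0 and 1, and waitforlisten is polling its RPC socket before spdkcli is driven against it. A minimal sketch of that launch-and-wait step; the real waitforlisten in autotest_common.sh retries with a timeout and richer diagnostics:]

# Sketch only: launch the target and block until its RPC socket answers.
nvmf_tgt=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt
rpc_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py

"$nvmf_tgt" -m 0x3 -p 0 &   # -m 0x3: reactors on cores 0 and 1; -p 0: main core
nvmf_tgt_pid=$!

# Poll the default RPC socket until the target responds (or dies).
until "$rpc_py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
	kill -0 "$nvmf_tgt_pid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
	sleep 0.5
done
echo "nvmf_tgt ($nvmf_tgt_pid) is listening on /var/tmp/spdk.sock"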
'\''127.0.0.1:4261'\'' True 00:27:13.850 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:27:13.850 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:27:13.850 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:27:13.850 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:27:13.850 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:27:13.850 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:27:13.850 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:27:13.850 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:27:13.850 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:27:13.850 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:27:13.850 ' 00:27:14.109 [2024-04-24 21:32:29.047860] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:27:16.643 [2024-04-24 21:32:31.100528] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:17.582 [2024-04-24 21:32:32.262303] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:27:19.487 [2024-04-24 21:32:34.392669] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:27:21.394 [2024-04-24 21:32:36.222738] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:27:22.772 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:27:22.772 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:27:22.772 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:27:22.772 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:27:22.772 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:27:22.772 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:27:22.772 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:27:22.772 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:27:22.772 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:27:22.772 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:27:22.772 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:27:22.772 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:22.772 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:27:22.772 
Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:27:22.772 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:22.772 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:27:22.772 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:27:22.772 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:27:22.772 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:27:22.772 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:22.772 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:27:22.772 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:27:22.772 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:27:22.772 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:27:22.772 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:22.772 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:27:22.772 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:27:22.772 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:27:23.030 21:32:37 -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:27:23.030 21:32:37 -- common/autotest_common.sh@716 -- # xtrace_disable 00:27:23.030 21:32:37 -- common/autotest_common.sh@10 -- # set +x 00:27:23.030 21:32:37 -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:27:23.030 21:32:37 -- common/autotest_common.sh@710 -- # xtrace_disable 00:27:23.030 21:32:37 -- common/autotest_common.sh@10 -- # set +x 00:27:23.030 21:32:37 -- spdkcli/nvmf.sh@69 -- # check_match 00:27:23.030 21:32:37 -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:27:23.289 21:32:38 -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:27:23.289 21:32:38 -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:27:23.289 21:32:38 -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:27:23.289 21:32:38 -- common/autotest_common.sh@716 -- # xtrace_disable 00:27:23.289 21:32:38 -- common/autotest_common.sh@10 -- # set +x 00:27:23.289 21:32:38 -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:27:23.289 21:32:38 -- common/autotest_common.sh@710 -- # xtrace_disable 00:27:23.289 21:32:38 -- common/autotest_common.sh@10 -- # set +x 
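The quoted arguments fed to spdkcli_job.py above come in (command, match string, flag) groups: the helper replays each spdkcli command against the live target and checks the output for the match string, which is what the "Executing command: [...]" entries report back. A minimal sketch of replaying a few of the create steps by hand with scripts/spdkcli.py in one-shot mode (run from the SPDK repo root against the default RPC socket; exact CLI layout inferred from this log):

    ./scripts/spdkcli.py /bdevs/malloc create 32 512 Malloc1
    ./scripts/spdkcli.py nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192
    ./scripts/spdkcli.py /nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True
    ./scripts/spdkcli.py /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4
    ./scripts/spdkcli.py ll /nvmf    # print the tree, as check_match does before diffing against the .match file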
00:27:23.289 21:32:38 -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:27:23.289 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:27:23.289 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:27:23.289 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:27:23.289 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:27:23.289 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:27:23.289 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:27:23.289 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:27:23.289 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:27:23.289 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:27:23.289 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:27:23.289 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:27:23.289 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:27:23.289 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:27:23.289 ' 00:27:28.573 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:27:28.573 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:27:28.573 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:27:28.573 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:27:28.573 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:27:28.573 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:27:28.573 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:27:28.573 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:27:28.573 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:27:28.573 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:27:28.573 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:27:28.573 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:27:28.573 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:27:28.573 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:27:28.573 21:32:43 -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:27:28.573 21:32:43 -- common/autotest_common.sh@716 -- # xtrace_disable 00:27:28.573 21:32:43 -- common/autotest_common.sh@10 -- # set +x 00:27:28.573 21:32:43 -- spdkcli/nvmf.sh@90 -- # killprocess 1372040 00:27:28.573 21:32:43 -- common/autotest_common.sh@936 -- # '[' -z 1372040 ']' 00:27:28.573 21:32:43 -- common/autotest_common.sh@940 -- # kill -0 1372040 00:27:28.573 21:32:43 -- common/autotest_common.sh@941 -- # uname 00:27:28.573 21:32:43 -- common/autotest_common.sh@941 -- # '[' 
Linux = Linux ']' 00:27:28.573 21:32:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1372040 00:27:28.573 21:32:43 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:27:28.573 21:32:43 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:28.573 21:32:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1372040' 00:27:28.573 killing process with pid 1372040 00:27:28.573 21:32:43 -- common/autotest_common.sh@955 -- # kill 1372040 00:27:28.573 [2024-04-24 21:32:43.207504] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:27:28.573 21:32:43 -- common/autotest_common.sh@960 -- # wait 1372040 00:27:28.833 21:32:43 -- spdkcli/nvmf.sh@1 -- # cleanup 00:27:28.833 21:32:43 -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:27:28.833 21:32:43 -- spdkcli/common.sh@13 -- # '[' -n 1372040 ']' 00:27:28.833 21:32:43 -- spdkcli/common.sh@14 -- # killprocess 1372040 00:27:28.833 21:32:43 -- common/autotest_common.sh@936 -- # '[' -z 1372040 ']' 00:27:28.833 21:32:43 -- common/autotest_common.sh@940 -- # kill -0 1372040 00:27:28.833 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (1372040) - No such process 00:27:28.833 21:32:43 -- common/autotest_common.sh@963 -- # echo 'Process with pid 1372040 is not found' 00:27:28.833 Process with pid 1372040 is not found 00:27:28.833 21:32:43 -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:27:28.833 21:32:43 -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:27:28.833 21:32:43 -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:27:28.833 00:27:28.833 real 0m15.831s 00:27:28.833 user 0m32.103s 00:27:28.833 sys 0m0.703s 00:27:28.833 21:32:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:27:28.833 21:32:43 -- common/autotest_common.sh@10 -- # set +x 00:27:28.833 ************************************ 00:27:28.833 END TEST spdkcli_nvmf_tcp 00:27:28.833 ************************************ 00:27:28.833 21:32:43 -- spdk/autotest.sh@288 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:27:28.833 21:32:43 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:27:28.833 21:32:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:28.833 21:32:43 -- common/autotest_common.sh@10 -- # set +x 00:27:29.093 ************************************ 00:27:29.093 START TEST nvmf_identify_passthru 00:27:29.093 ************************************ 00:27:29.093 21:32:43 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:27:29.093 * Looking for test storage... 
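That closes the spdkcli suite: run_test prints the real/user/sys trio seen above for each suite it brackets, and the target is torn down through the killprocess helper, whose xtrace is the block ending in "killing process with pid 1372040". A condensed sketch of that shutdown pattern, reconstructed from the trace rather than the actual autotest_common.sh source:

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" 2>/dev/null || { echo "Process with pid $pid is not found"; return 0; }
        # never kill the sudo wrapper itself; the target shows up as reactor_0
        [ "$(ps --no-headers -o comm= "$pid")" = sudo ] && return 1
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid" 2>/dev/null    # wait reaps it when it is our child
    }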
00:27:29.093 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:27:29.093 21:32:43 -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:27:29.093 21:32:43 -- nvmf/common.sh@7 -- # uname -s 00:27:29.093 21:32:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:29.093 21:32:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:29.093 21:32:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:29.094 21:32:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:29.094 21:32:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:29.094 21:32:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:29.094 21:32:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:29.094 21:32:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:29.094 21:32:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:29.094 21:32:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:29.094 21:32:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b7babf-2e5c-ee11-906e-a4bf01970bf2 00:27:29.094 21:32:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=80b7babf-2e5c-ee11-906e-a4bf01970bf2 00:27:29.094 21:32:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:29.094 21:32:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:29.094 21:32:43 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:27:29.094 21:32:43 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:29.094 21:32:43 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:27:29.094 21:32:43 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:29.094 21:32:43 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:29.094 21:32:43 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:29.094 21:32:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:29.094 21:32:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:29.094 21:32:43 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:29.094 21:32:43 -- paths/export.sh@5 -- # export PATH 00:27:29.094 21:32:43 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:29.094 21:32:43 -- nvmf/common.sh@47 -- # : 0 00:27:29.094 21:32:43 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:29.094 21:32:43 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:29.094 21:32:43 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:29.094 21:32:43 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:29.094 21:32:43 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:29.094 21:32:43 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:29.094 21:32:43 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:29.094 21:32:43 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:29.094 21:32:43 -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:27:29.094 21:32:43 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:29.094 21:32:43 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:29.094 21:32:43 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:29.094 21:32:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:29.094 21:32:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:29.094 21:32:43 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:29.094 21:32:43 -- paths/export.sh@5 -- # export PATH 00:27:29.094 21:32:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:29.094 21:32:43 -- 
target/identify_passthru.sh@12 -- # nvmftestinit 00:27:29.094 21:32:43 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:27:29.094 21:32:43 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:29.094 21:32:43 -- nvmf/common.sh@437 -- # prepare_net_devs 00:27:29.094 21:32:43 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:27:29.094 21:32:43 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:27:29.094 21:32:43 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:29.094 21:32:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:29.094 21:32:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:29.094 21:32:43 -- nvmf/common.sh@403 -- # [[ phy-fallback != virt ]] 00:27:29.094 21:32:43 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:27:29.094 21:32:43 -- nvmf/common.sh@285 -- # xtrace_disable 00:27:29.094 21:32:43 -- common/autotest_common.sh@10 -- # set +x 00:27:34.374 21:32:49 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:34.374 21:32:49 -- nvmf/common.sh@291 -- # pci_devs=() 00:27:34.374 21:32:49 -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:34.374 21:32:49 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:34.374 21:32:49 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:34.374 21:32:49 -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:34.374 21:32:49 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:34.374 21:32:49 -- nvmf/common.sh@295 -- # net_devs=() 00:27:34.374 21:32:49 -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:34.374 21:32:49 -- nvmf/common.sh@296 -- # e810=() 00:27:34.374 21:32:49 -- nvmf/common.sh@296 -- # local -ga e810 00:27:34.374 21:32:49 -- nvmf/common.sh@297 -- # x722=() 00:27:34.374 21:32:49 -- nvmf/common.sh@297 -- # local -ga x722 00:27:34.374 21:32:49 -- nvmf/common.sh@298 -- # mlx=() 00:27:34.374 21:32:49 -- nvmf/common.sh@298 -- # local -ga mlx 00:27:34.374 21:32:49 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:34.374 21:32:49 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:34.374 21:32:49 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:34.374 21:32:49 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:34.374 21:32:49 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:34.374 21:32:49 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:34.374 21:32:49 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:34.374 21:32:49 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:34.374 21:32:49 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:34.374 21:32:49 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:34.374 21:32:49 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:34.374 21:32:49 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:34.374 21:32:49 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:34.374 21:32:49 -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:27:34.374 21:32:49 -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:27:34.374 21:32:49 -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:27:34.374 21:32:49 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:34.374 21:32:49 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:34.374 21:32:49 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:27:34.374 Found 0000:27:00.0 (0x8086 - 
0x159b) 00:27:34.374 21:32:49 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:34.374 21:32:49 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:34.374 21:32:49 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:34.374 21:32:49 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:34.374 21:32:49 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:34.374 21:32:49 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:34.374 21:32:49 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:27:34.374 Found 0000:27:00.1 (0x8086 - 0x159b) 00:27:34.374 21:32:49 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:34.374 21:32:49 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:34.374 21:32:49 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:34.374 21:32:49 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:34.374 21:32:49 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:34.374 21:32:49 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:34.374 21:32:49 -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:27:34.375 21:32:49 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:34.375 21:32:49 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:34.375 21:32:49 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:27:34.375 21:32:49 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:34.375 21:32:49 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:27:34.375 Found net devices under 0000:27:00.0: cvl_0_0 00:27:34.375 21:32:49 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:27:34.375 21:32:49 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:34.375 21:32:49 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:34.375 21:32:49 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:27:34.375 21:32:49 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:34.375 21:32:49 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:27:34.375 Found net devices under 0000:27:00.1: cvl_0_1 00:27:34.375 21:32:49 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:27:34.375 21:32:49 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:27:34.375 21:32:49 -- nvmf/common.sh@403 -- # is_hw=yes 00:27:34.375 21:32:49 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:27:34.375 21:32:49 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:27:34.375 21:32:49 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:27:34.375 21:32:49 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:34.375 21:32:49 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:34.375 21:32:49 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:34.375 21:32:49 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:34.375 21:32:49 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:34.375 21:32:49 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:34.375 21:32:49 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:34.375 21:32:49 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:34.375 21:32:49 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:34.375 21:32:49 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:34.375 21:32:49 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:34.375 21:32:49 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:34.375 21:32:49 -- nvmf/common.sh@251 -- # ip link set 
cvl_0_0 netns cvl_0_0_ns_spdk 00:27:34.375 21:32:49 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:34.375 21:32:49 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:34.375 21:32:49 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:34.375 21:32:49 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:34.375 21:32:49 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:34.375 21:32:49 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:34.375 21:32:49 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:34.375 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:34.375 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.491 ms 00:27:34.375 00:27:34.375 --- 10.0.0.2 ping statistics --- 00:27:34.375 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:34.375 rtt min/avg/max/mdev = 0.491/0.491/0.491/0.000 ms 00:27:34.375 21:32:49 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:34.375 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:34.375 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.286 ms 00:27:34.375 00:27:34.375 --- 10.0.0.1 ping statistics --- 00:27:34.375 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:34.375 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:27:34.375 21:32:49 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:34.375 21:32:49 -- nvmf/common.sh@411 -- # return 0 00:27:34.375 21:32:49 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:27:34.375 21:32:49 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:34.375 21:32:49 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:27:34.375 21:32:49 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:27:34.375 21:32:49 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:34.375 21:32:49 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:27:34.375 21:32:49 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:27:34.635 21:32:49 -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:27:34.635 21:32:49 -- common/autotest_common.sh@710 -- # xtrace_disable 00:27:34.635 21:32:49 -- common/autotest_common.sh@10 -- # set +x 00:27:34.635 21:32:49 -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:27:34.635 21:32:49 -- common/autotest_common.sh@1510 -- # bdfs=() 00:27:34.635 21:32:49 -- common/autotest_common.sh@1510 -- # local bdfs 00:27:34.635 21:32:49 -- common/autotest_common.sh@1511 -- # bdfs=($(get_nvme_bdfs)) 00:27:34.635 21:32:49 -- common/autotest_common.sh@1511 -- # get_nvme_bdfs 00:27:34.635 21:32:49 -- common/autotest_common.sh@1499 -- # bdfs=() 00:27:34.635 21:32:49 -- common/autotest_common.sh@1499 -- # local bdfs 00:27:34.635 21:32:49 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:27:34.635 21:32:49 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:27:34.635 21:32:49 -- common/autotest_common.sh@1500 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/gen_nvme.sh 00:27:34.635 21:32:49 -- common/autotest_common.sh@1501 -- # (( 3 == 0 )) 00:27:34.635 21:32:49 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:c9:00.0 0000:ca:00.0 0000:cb:00.0 00:27:34.635 21:32:49 -- common/autotest_common.sh@1513 -- # echo 0000:c9:00.0 00:27:34.635 21:32:49 -- target/identify_passthru.sh@16 -- # bdf=0000:c9:00.0 00:27:34.635 
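get_first_nvme_bdf, traced just above, resolves the PCI address of the first local NVMe controller by rendering a bdev_nvme config with gen_nvme.sh and pulling the traddr fields out with jq. Standalone, from the SPDK repo root, that is roughly:

    # Three controllers are visible on this box; the test binds to the first.
    bdfs=($(./scripts/gen_nvme.sh | jq -r '.config[].params.traddr'))
    printf '%s\n' "${bdfs[@]}"    # 0000:c9:00.0 0000:ca:00.0 0000:cb:00.0 in this run
    bdf=${bdfs[0]}                # -> 0000:c9:00.0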
21:32:49 -- target/identify_passthru.sh@17 -- # '[' -z 0000:c9:00.0 ']' 00:27:34.635 21:32:49 -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:c9:00.0' -i 0 00:27:34.635 21:32:49 -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:27:34.635 21:32:49 -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:27:34.635 EAL: No free 2048 kB hugepages reported on node 1 00:27:39.905 21:32:54 -- target/identify_passthru.sh@23 -- # nvme_serial_number=PHLJ941300BK2P0BGN 00:27:39.905 21:32:54 -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:c9:00.0' -i 0 00:27:39.905 21:32:54 -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:27:39.905 21:32:54 -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:27:39.905 EAL: No free 2048 kB hugepages reported on node 1 00:27:45.179 21:32:59 -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:27:45.179 21:32:59 -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:27:45.179 21:32:59 -- common/autotest_common.sh@716 -- # xtrace_disable 00:27:45.179 21:32:59 -- common/autotest_common.sh@10 -- # set +x 00:27:45.179 21:32:59 -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:27:45.179 21:32:59 -- common/autotest_common.sh@710 -- # xtrace_disable 00:27:45.179 21:32:59 -- common/autotest_common.sh@10 -- # set +x 00:27:45.179 21:32:59 -- target/identify_passthru.sh@31 -- # nvmfpid=1380681 00:27:45.179 21:32:59 -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:45.179 21:32:59 -- target/identify_passthru.sh@35 -- # waitforlisten 1380681 00:27:45.179 21:32:59 -- common/autotest_common.sh@817 -- # '[' -z 1380681 ']' 00:27:45.179 21:32:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:45.179 21:32:59 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:45.179 21:32:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:45.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:45.179 21:32:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:45.179 21:32:59 -- common/autotest_common.sh@10 -- # set +x 00:27:45.179 21:32:59 -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:27:45.179 [2024-04-24 21:33:00.055658] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization... 00:27:45.179 [2024-04-24 21:33:00.055774] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:45.179 EAL: No free 2048 kB hugepages reported on node 1 00:27:45.437 [2024-04-24 21:33:00.182158] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:45.437 [2024-04-24 21:33:00.277731] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:45.437 [2024-04-24 21:33:00.277770] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
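Because identify passthru must be configured before subsystem initialization, the target above is launched inside the namespace with --wait-for-rpc; the INFO: Requests/response JSON that follows is the rpc_cmd wrapper driving /var/tmp/spdk.sock. The same startup sequence by hand, with the flags as this run uses them (a sketch, not the test's exact code path):

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
    ./scripts/rpc.py nvmf_set_config --passthru-identify-ctrlr   # sets admin_cmd_passthru.identify_ctrlr=true
    ./scripts/rpc.py framework_start_init                        # release the target from --wait-for-rpc
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192     # transport flags copied from the trace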
00:27:45.437 [2024-04-24 21:33:00.277782] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:45.437 [2024-04-24 21:33:00.277791] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:45.437 [2024-04-24 21:33:00.277798] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:45.437 [2024-04-24 21:33:00.277958] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:45.437 [2024-04-24 21:33:00.278061] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:45.437 [2024-04-24 21:33:00.278170] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:45.437 [2024-04-24 21:33:00.278181] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:46.003 21:33:00 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:46.003 21:33:00 -- common/autotest_common.sh@850 -- # return 0 00:27:46.003 21:33:00 -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:27:46.003 21:33:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:46.003 21:33:00 -- common/autotest_common.sh@10 -- # set +x 00:27:46.003 INFO: Log level set to 20 00:27:46.003 INFO: Requests: 00:27:46.003 { 00:27:46.003 "jsonrpc": "2.0", 00:27:46.003 "method": "nvmf_set_config", 00:27:46.003 "id": 1, 00:27:46.003 "params": { 00:27:46.003 "admin_cmd_passthru": { 00:27:46.003 "identify_ctrlr": true 00:27:46.003 } 00:27:46.003 } 00:27:46.003 } 00:27:46.003 00:27:46.003 INFO: response: 00:27:46.003 { 00:27:46.003 "jsonrpc": "2.0", 00:27:46.003 "id": 1, 00:27:46.003 "result": true 00:27:46.003 } 00:27:46.003 00:27:46.003 21:33:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:46.003 21:33:00 -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:27:46.003 21:33:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:46.003 21:33:00 -- common/autotest_common.sh@10 -- # set +x 00:27:46.003 INFO: Setting log level to 20 00:27:46.003 INFO: Setting log level to 20 00:27:46.003 INFO: Log level set to 20 00:27:46.003 INFO: Log level set to 20 00:27:46.003 INFO: Requests: 00:27:46.003 { 00:27:46.003 "jsonrpc": "2.0", 00:27:46.003 "method": "framework_start_init", 00:27:46.003 "id": 1 00:27:46.003 } 00:27:46.003 00:27:46.003 INFO: Requests: 00:27:46.003 { 00:27:46.003 "jsonrpc": "2.0", 00:27:46.003 "method": "framework_start_init", 00:27:46.003 "id": 1 00:27:46.003 } 00:27:46.003 00:27:46.003 [2024-04-24 21:33:00.906281] nvmf_tgt.c: 453:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:27:46.003 INFO: response: 00:27:46.003 { 00:27:46.003 "jsonrpc": "2.0", 00:27:46.003 "id": 1, 00:27:46.003 "result": true 00:27:46.003 } 00:27:46.003 00:27:46.003 INFO: response: 00:27:46.003 { 00:27:46.003 "jsonrpc": "2.0", 00:27:46.003 "id": 1, 00:27:46.003 "result": true 00:27:46.003 } 00:27:46.003 00:27:46.003 21:33:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:46.003 21:33:00 -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:46.003 21:33:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:46.003 21:33:00 -- common/autotest_common.sh@10 -- # set +x 00:27:46.003 INFO: Setting log level to 40 00:27:46.003 INFO: Setting log level to 40 00:27:46.003 INFO: Setting log level to 40 00:27:46.003 [2024-04-24 21:33:00.920462] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:46.003 21:33:00 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:46.003 21:33:00 -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:27:46.003 21:33:00 -- common/autotest_common.sh@716 -- # xtrace_disable 00:27:46.003 21:33:00 -- common/autotest_common.sh@10 -- # set +x 00:27:46.003 21:33:00 -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:c9:00.0 00:27:46.003 21:33:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:46.003 21:33:00 -- common/autotest_common.sh@10 -- # set +x 00:27:49.298 Nvme0n1 00:27:49.298 21:33:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:49.298 21:33:03 -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:27:49.298 21:33:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:49.298 21:33:03 -- common/autotest_common.sh@10 -- # set +x 00:27:49.298 21:33:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:49.298 21:33:03 -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:27:49.298 21:33:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:49.298 21:33:03 -- common/autotest_common.sh@10 -- # set +x 00:27:49.298 21:33:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:49.298 21:33:03 -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:49.298 21:33:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:49.298 21:33:03 -- common/autotest_common.sh@10 -- # set +x 00:27:49.298 [2024-04-24 21:33:03.825938] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:49.298 21:33:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:49.298 21:33:03 -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:27:49.298 21:33:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:49.298 21:33:03 -- common/autotest_common.sh@10 -- # set +x 00:27:49.298 [2024-04-24 21:33:03.833656] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:27:49.298 [ 00:27:49.298 { 00:27:49.298 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:49.298 "subtype": "Discovery", 00:27:49.298 "listen_addresses": [], 00:27:49.298 "allow_any_host": true, 00:27:49.298 "hosts": [] 00:27:49.298 }, 00:27:49.298 { 00:27:49.298 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:49.298 "subtype": "NVMe", 00:27:49.298 "listen_addresses": [ 00:27:49.298 { 00:27:49.298 "transport": "TCP", 00:27:49.298 "trtype": "TCP", 00:27:49.298 "adrfam": "IPv4", 00:27:49.298 "traddr": "10.0.0.2", 00:27:49.298 "trsvcid": "4420" 00:27:49.298 } 00:27:49.298 ], 00:27:49.298 "allow_any_host": true, 00:27:49.298 "hosts": [], 00:27:49.298 "serial_number": "SPDK00000000000001", 00:27:49.298 "model_number": "SPDK bdev Controller", 00:27:49.298 "max_namespaces": 1, 00:27:49.298 "min_cntlid": 1, 00:27:49.298 "max_cntlid": 65519, 00:27:49.298 "namespaces": [ 00:27:49.298 { 00:27:49.298 "nsid": 1, 00:27:49.298 "bdev_name": "Nvme0n1", 00:27:49.298 "name": "Nvme0n1", 00:27:49.298 "nguid": "38AF5D3113184FDFBFBC778C7A7D01D8", 00:27:49.298 "uuid": "38af5d31-1318-4fdf-bfbc-778c7a7d01d8" 00:27:49.298 } 00:27:49.298 ] 00:27:49.298 } 00:27:49.298 ] 00:27:49.298 21:33:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:49.298 21:33:03 -- 
target/identify_passthru.sh@54 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:49.298 21:33:03 -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:27:49.298 21:33:03 -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:27:49.298 EAL: No free 2048 kB hugepages reported on node 1 00:27:49.298 21:33:04 -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ941300BK2P0BGN 00:27:49.298 21:33:04 -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:49.298 21:33:04 -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:27:49.298 21:33:04 -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:27:49.298 EAL: No free 2048 kB hugepages reported on node 1 00:27:49.559 21:33:04 -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:27:49.559 21:33:04 -- target/identify_passthru.sh@63 -- # '[' PHLJ941300BK2P0BGN '!=' PHLJ941300BK2P0BGN ']' 00:27:49.559 21:33:04 -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:27:49.559 21:33:04 -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:49.559 21:33:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:49.559 21:33:04 -- common/autotest_common.sh@10 -- # set +x 00:27:49.559 21:33:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:49.559 21:33:04 -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:27:49.559 21:33:04 -- target/identify_passthru.sh@77 -- # nvmftestfini 00:27:49.559 21:33:04 -- nvmf/common.sh@477 -- # nvmfcleanup 00:27:49.559 21:33:04 -- nvmf/common.sh@117 -- # sync 00:27:49.559 21:33:04 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:49.559 21:33:04 -- nvmf/common.sh@120 -- # set +e 00:27:49.559 21:33:04 -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:49.559 21:33:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:49.560 rmmod nvme_tcp 00:27:49.560 rmmod nvme_fabrics 00:27:49.560 rmmod nvme_keyring 00:27:49.560 21:33:04 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:49.560 21:33:04 -- nvmf/common.sh@124 -- # set -e 00:27:49.560 21:33:04 -- nvmf/common.sh@125 -- # return 0 00:27:49.560 21:33:04 -- nvmf/common.sh@478 -- # '[' -n 1380681 ']' 00:27:49.560 21:33:04 -- nvmf/common.sh@479 -- # killprocess 1380681 00:27:49.560 21:33:04 -- common/autotest_common.sh@936 -- # '[' -z 1380681 ']' 00:27:49.560 21:33:04 -- common/autotest_common.sh@940 -- # kill -0 1380681 00:27:49.560 21:33:04 -- common/autotest_common.sh@941 -- # uname 00:27:49.560 21:33:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:49.560 21:33:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1380681 00:27:49.560 21:33:04 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:27:49.560 21:33:04 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:49.560 21:33:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1380681' 00:27:49.560 killing process with pid 1380681 00:27:49.560 21:33:04 -- common/autotest_common.sh@955 -- # kill 1380681 00:27:49.560 [2024-04-24 21:33:04.504795] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 
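The actual pass/fail logic of identify_passthru is the pair of string comparisons visible above: the serial and model numbers read directly over PCIe earlier (PHLJ941300BK2P0BGN / INTEL) must come back unchanged through the passthru controller over NVMe/TCP. Reduced to its core, using the identify invocations from this run:

    sn_pcie=$(./build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:c9:00.0' -i 0 |
              awk '/Serial Number:/ {print $3}')
    sn_tcp=$(./build/bin/spdk_nvme_identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' |
             awk '/Serial Number:/ {print $3}')
    [ "$sn_pcie" != "$sn_tcp" ] && exit 1    # passthru broke the identify data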
00:27:49.560 21:33:04 -- common/autotest_common.sh@960 -- # wait 1380681 00:27:52.931 21:33:07 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:27:52.931 21:33:07 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:27:52.931 21:33:07 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:27:52.931 21:33:07 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:52.931 21:33:07 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:52.931 21:33:07 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:52.931 21:33:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:52.931 21:33:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:54.311 21:33:09 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:54.571 00:27:54.571 real 0m25.468s 00:27:54.571 user 0m36.614s 00:27:54.571 sys 0m5.187s 00:27:54.571 21:33:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:27:54.571 21:33:09 -- common/autotest_common.sh@10 -- # set +x 00:27:54.571 ************************************ 00:27:54.571 END TEST nvmf_identify_passthru 00:27:54.571 ************************************ 00:27:54.571 21:33:09 -- spdk/autotest.sh@290 -- # run_test nvmf_dif /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/dif.sh 00:27:54.571 21:33:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:27:54.571 21:33:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:54.571 21:33:09 -- common/autotest_common.sh@10 -- # set +x 00:27:54.571 ************************************ 00:27:54.571 START TEST nvmf_dif 00:27:54.571 ************************************ 00:27:54.571 21:33:09 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/dif.sh 00:27:54.571 * Looking for test storage... 
00:27:54.571 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:27:54.571 21:33:09 -- target/dif.sh@13 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:27:54.571 21:33:09 -- nvmf/common.sh@7 -- # uname -s 00:27:54.571 21:33:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:54.571 21:33:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:54.571 21:33:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:54.571 21:33:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:54.571 21:33:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:54.571 21:33:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:54.571 21:33:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:54.571 21:33:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:54.571 21:33:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:54.571 21:33:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:54.571 21:33:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b7babf-2e5c-ee11-906e-a4bf01970bf2 00:27:54.571 21:33:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=80b7babf-2e5c-ee11-906e-a4bf01970bf2 00:27:54.571 21:33:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:54.571 21:33:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:54.571 21:33:09 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:27:54.571 21:33:09 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:54.571 21:33:09 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:27:54.571 21:33:09 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:54.571 21:33:09 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:54.571 21:33:09 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:54.571 21:33:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:54.572 21:33:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:54.572 21:33:09 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:54.572 21:33:09 -- paths/export.sh@5 -- # export PATH 00:27:54.572 21:33:09 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:54.572 21:33:09 -- nvmf/common.sh@47 -- # : 0 00:27:54.572 21:33:09 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:54.572 21:33:09 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:54.572 21:33:09 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:54.572 21:33:09 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:54.572 21:33:09 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:54.572 21:33:09 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:54.572 21:33:09 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:54.572 21:33:09 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:54.572 21:33:09 -- target/dif.sh@15 -- # NULL_META=16 00:27:54.572 21:33:09 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:27:54.572 21:33:09 -- target/dif.sh@15 -- # NULL_SIZE=64 00:27:54.572 21:33:09 -- target/dif.sh@15 -- # NULL_DIF=1 00:27:54.572 21:33:09 -- target/dif.sh@135 -- # nvmftestinit 00:27:54.572 21:33:09 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:27:54.572 21:33:09 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:54.572 21:33:09 -- nvmf/common.sh@437 -- # prepare_net_devs 00:27:54.572 21:33:09 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:27:54.572 21:33:09 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:27:54.572 21:33:09 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:54.572 21:33:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:54.572 21:33:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:54.572 21:33:09 -- nvmf/common.sh@403 -- # [[ phy-fallback != virt ]] 00:27:54.572 21:33:09 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:27:54.572 21:33:09 -- nvmf/common.sh@285 -- # xtrace_disable 00:27:54.572 21:33:09 -- common/autotest_common.sh@10 -- # set +x 00:28:01.143 21:33:15 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:28:01.143 21:33:15 -- nvmf/common.sh@291 -- # pci_devs=() 00:28:01.143 21:33:15 -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:01.143 21:33:15 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:01.143 21:33:15 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:01.143 21:33:15 -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:01.143 21:33:15 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:01.143 21:33:15 -- nvmf/common.sh@295 -- # net_devs=() 00:28:01.143 21:33:15 -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:01.143 21:33:15 -- nvmf/common.sh@296 -- # e810=() 00:28:01.143 21:33:15 -- nvmf/common.sh@296 -- # local -ga e810 00:28:01.143 21:33:15 -- nvmf/common.sh@297 -- # x722=() 00:28:01.143 21:33:15 -- nvmf/common.sh@297 -- # local -ga x722 00:28:01.143 21:33:15 -- nvmf/common.sh@298 -- # mlx=() 00:28:01.143 21:33:15 -- nvmf/common.sh@298 -- # local -ga mlx 00:28:01.143 21:33:15 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:01.143 21:33:15 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:01.143 21:33:15 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:01.143 21:33:15 -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:01.143 21:33:15 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:01.143 21:33:15 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:01.143 21:33:15 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:01.143 21:33:15 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:01.143 21:33:15 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:01.143 21:33:15 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:01.143 21:33:15 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:01.143 21:33:15 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:01.143 21:33:15 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:01.143 21:33:15 -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:28:01.143 21:33:15 -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:28:01.143 21:33:15 -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:28:01.144 21:33:15 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:01.144 21:33:15 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:01.144 21:33:15 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:28:01.144 Found 0000:27:00.0 (0x8086 - 0x159b) 00:28:01.144 21:33:15 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:01.144 21:33:15 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:01.144 21:33:15 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:01.144 21:33:15 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:01.144 21:33:15 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:01.144 21:33:15 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:01.144 21:33:15 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:28:01.144 Found 0000:27:00.1 (0x8086 - 0x159b) 00:28:01.144 21:33:15 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:01.144 21:33:15 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:01.144 21:33:15 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:01.144 21:33:15 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:01.144 21:33:15 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:01.144 21:33:15 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:01.144 21:33:15 -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:28:01.144 21:33:15 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:01.144 21:33:15 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:01.144 21:33:15 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:28:01.144 21:33:15 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:01.144 21:33:15 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:28:01.144 Found net devices under 0000:27:00.0: cvl_0_0 00:28:01.144 21:33:15 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:28:01.144 21:33:15 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:01.144 21:33:15 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:01.144 21:33:15 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:28:01.144 21:33:15 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:01.144 21:33:15 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:28:01.144 Found net devices under 0000:27:00.1: cvl_0_1 00:28:01.144 21:33:15 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:28:01.144 21:33:15 -- 
nvmf/common.sh@393 -- # (( 2 == 0 )) 00:28:01.144 21:33:15 -- nvmf/common.sh@403 -- # is_hw=yes 00:28:01.144 21:33:15 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:28:01.144 21:33:15 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:28:01.144 21:33:15 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:28:01.144 21:33:15 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:01.144 21:33:15 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:01.144 21:33:15 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:01.144 21:33:15 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:01.144 21:33:15 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:01.144 21:33:15 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:01.144 21:33:15 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:01.144 21:33:15 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:01.144 21:33:15 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:01.144 21:33:15 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:01.144 21:33:15 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:01.144 21:33:15 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:01.144 21:33:15 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:01.144 21:33:15 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:01.144 21:33:15 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:01.144 21:33:15 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:01.144 21:33:15 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:01.144 21:33:15 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:01.144 21:33:15 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:01.144 21:33:15 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:01.144 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:01.144 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.433 ms 00:28:01.144 00:28:01.144 --- 10.0.0.2 ping statistics --- 00:28:01.144 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:01.144 rtt min/avg/max/mdev = 0.433/0.433/0.433/0.000 ms 00:28:01.144 21:33:15 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:01.144 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:01.144 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.176 ms 00:28:01.144 00:28:01.144 --- 10.0.0.1 ping statistics --- 00:28:01.144 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:01.144 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:28:01.144 21:33:15 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:01.144 21:33:15 -- nvmf/common.sh@411 -- # return 0 00:28:01.144 21:33:15 -- nvmf/common.sh@439 -- # '[' iso == iso ']' 00:28:01.144 21:33:15 -- nvmf/common.sh@440 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh 00:28:03.056 0000:74:02.0 (8086 0cfe): Already using the vfio-pci driver 00:28:03.056 0000:c9:00.0 (8086 0a54): Already using the vfio-pci driver 00:28:03.056 0000:f1:02.0 (8086 0cfe): Already using the vfio-pci driver 00:28:03.056 0000:79:02.0 (8086 0cfe): Already using the vfio-pci driver 00:28:03.056 0000:cb:00.0 (8086 0a54): Already using the vfio-pci driver 00:28:03.056 0000:6f:01.0 (8086 0b25): Already using the vfio-pci driver 00:28:03.056 0000:6f:02.0 (8086 0cfe): Already using the vfio-pci driver 00:28:03.056 0000:f6:01.0 (8086 0b25): Already using the vfio-pci driver 00:28:03.056 0000:f6:02.0 (8086 0cfe): Already using the vfio-pci driver 00:28:03.056 0000:74:01.0 (8086 0b25): Already using the vfio-pci driver 00:28:03.057 0000:6a:02.0 (8086 0cfe): Already using the vfio-pci driver 00:28:03.057 0000:79:01.0 (8086 0b25): Already using the vfio-pci driver 00:28:03.057 0000:ec:01.0 (8086 0b25): Already using the vfio-pci driver 00:28:03.057 0000:6a:01.0 (8086 0b25): Already using the vfio-pci driver 00:28:03.057 0000:ec:02.0 (8086 0cfe): Already using the vfio-pci driver 00:28:03.057 0000:ca:00.0 (8086 0a54): Already using the vfio-pci driver 00:28:03.057 0000:e7:01.0 (8086 0b25): Already using the vfio-pci driver 00:28:03.057 0000:e7:02.0 (8086 0cfe): Already using the vfio-pci driver 00:28:03.057 0000:f1:01.0 (8086 0b25): Already using the vfio-pci driver 00:28:03.624 21:33:18 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:03.624 21:33:18 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:28:03.624 21:33:18 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:28:03.624 21:33:18 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:03.624 21:33:18 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:28:03.624 21:33:18 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:28:03.624 21:33:18 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:28:03.624 21:33:18 -- target/dif.sh@137 -- # nvmfappstart 00:28:03.624 21:33:18 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:28:03.624 21:33:18 -- common/autotest_common.sh@710 -- # xtrace_disable 00:28:03.624 21:33:18 -- common/autotest_common.sh@10 -- # set +x 00:28:03.624 21:33:18 -- nvmf/common.sh@470 -- # nvmfpid=1387932 00:28:03.624 21:33:18 -- nvmf/common.sh@471 -- # waitforlisten 1387932 00:28:03.624 21:33:18 -- common/autotest_common.sh@817 -- # '[' -z 1387932 ']' 00:28:03.624 21:33:18 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:03.624 21:33:18 -- common/autotest_common.sh@822 -- # local max_retries=100 00:28:03.624 21:33:18 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:03.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
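nvmf_tcp_init, traced both here and in the identify_passthru run above, turns the two cvl_0_* ports into a self-contained NVMe/TCP topology: one port moves into a private network namespace as the target side (10.0.0.2) while its sibling stays in the root namespace as the initiator (10.0.0.1), and an iptables rule admits the NVMe/TCP port. Condensed from the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
    ping -c 1 10.0.0.2                                             # sanity-check reachability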
00:28:03.624 21:33:18 -- common/autotest_common.sh@826 -- # xtrace_disable 00:28:03.624 21:33:18 -- common/autotest_common.sh@10 -- # set +x 00:28:03.624 21:33:18 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:28:03.624 [2024-04-24 21:33:18.493053] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization... 00:28:03.624 [2024-04-24 21:33:18.493153] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:03.624 EAL: No free 2048 kB hugepages reported on node 1 00:28:03.883 [2024-04-24 21:33:18.610552] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:03.883 [2024-04-24 21:33:18.702459] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:03.883 [2024-04-24 21:33:18.702493] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:03.883 [2024-04-24 21:33:18.702503] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:03.883 [2024-04-24 21:33:18.702514] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:03.883 [2024-04-24 21:33:18.702522] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:03.883 [2024-04-24 21:33:18.702549] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:04.452 21:33:19 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:28:04.452 21:33:19 -- common/autotest_common.sh@850 -- # return 0 00:28:04.452 21:33:19 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:28:04.452 21:33:19 -- common/autotest_common.sh@716 -- # xtrace_disable 00:28:04.452 21:33:19 -- common/autotest_common.sh@10 -- # set +x 00:28:04.452 21:33:19 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:04.452 21:33:19 -- target/dif.sh@139 -- # create_transport 00:28:04.452 21:33:19 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:28:04.452 21:33:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:04.452 21:33:19 -- common/autotest_common.sh@10 -- # set +x 00:28:04.452 [2024-04-24 21:33:19.225622] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:04.452 21:33:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:04.452 21:33:19 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:28:04.452 21:33:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:28:04.452 21:33:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:04.452 21:33:19 -- common/autotest_common.sh@10 -- # set +x 00:28:04.452 ************************************ 00:28:04.452 START TEST fio_dif_1_default 00:28:04.452 ************************************ 00:28:04.452 21:33:19 -- common/autotest_common.sh@1111 -- # fio_dif_1 00:28:04.452 21:33:19 -- target/dif.sh@86 -- # create_subsystems 0 00:28:04.452 21:33:19 -- target/dif.sh@28 -- # local sub 00:28:04.452 21:33:19 -- target/dif.sh@30 -- # for sub in "$@" 00:28:04.452 21:33:19 -- target/dif.sh@31 -- # create_subsystem 0 00:28:04.452 21:33:19 -- target/dif.sh@18 -- # local sub_id=0 00:28:04.452 21:33:19 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 
00:28:04.452 21:33:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:04.452 21:33:19 -- common/autotest_common.sh@10 -- # set +x 00:28:04.452 bdev_null0 00:28:04.452 21:33:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:04.452 21:33:19 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:04.452 21:33:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:04.452 21:33:19 -- common/autotest_common.sh@10 -- # set +x 00:28:04.452 21:33:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:04.452 21:33:19 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:04.452 21:33:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:04.452 21:33:19 -- common/autotest_common.sh@10 -- # set +x 00:28:04.452 21:33:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:04.452 21:33:19 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:04.452 21:33:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:04.452 21:33:19 -- common/autotest_common.sh@10 -- # set +x 00:28:04.452 [2024-04-24 21:33:19.361777] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:04.452 21:33:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:04.452 21:33:19 -- target/dif.sh@87 -- # fio /dev/fd/62 00:28:04.452 21:33:19 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:04.452 21:33:19 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:04.452 21:33:19 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:28:04.452 21:33:19 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:04.452 21:33:19 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:28:04.452 21:33:19 -- common/autotest_common.sh@1325 -- # local sanitizers 00:28:04.452 21:33:19 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev 00:28:04.452 21:33:19 -- common/autotest_common.sh@1327 -- # shift 00:28:04.452 21:33:19 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:28:04.452 21:33:19 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:28:04.452 21:33:19 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:28:04.452 21:33:19 -- nvmf/common.sh@521 -- # config=() 00:28:04.452 21:33:19 -- nvmf/common.sh@521 -- # local subsystem config 00:28:04.453 21:33:19 -- target/dif.sh@82 -- # gen_fio_conf 00:28:04.453 21:33:19 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:28:04.453 21:33:19 -- target/dif.sh@54 -- # local file 00:28:04.453 21:33:19 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:28:04.453 { 00:28:04.453 "params": { 00:28:04.453 "name": "Nvme$subsystem", 00:28:04.453 "trtype": "$TEST_TRANSPORT", 00:28:04.453 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:04.453 "adrfam": "ipv4", 00:28:04.453 "trsvcid": "$NVMF_PORT", 00:28:04.453 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:04.453 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:04.453 "hdgst": ${hdgst:-false}, 00:28:04.453 "ddgst": ${ddgst:-false} 00:28:04.453 }, 00:28:04.453 "method": "bdev_nvme_attach_controller" 00:28:04.453 } 00:28:04.453 EOF 00:28:04.453 )") 00:28:04.453 
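The provisioning traced here, together with the transport creation at dif.sh@50 above, is a short sequence of rpc_cmd calls against the target's /var/tmp/spdk.sock: create a 64 MiB null bdev with 512-byte blocks, 16 bytes of metadata and DIF type 1; wrap it in a subsystem; expose the bdev as a namespace; and add a TCP listener on 10.0.0.2:4420. rpc_cmd forwards to scripts/rpc.py (optionally via a persistent RPC daemon), so the equivalent direct invocations, arguments copied from the trace, are:

    scripts/rpc.py nvmf_create_transport -t tcp -o --dif-insert-or-strip
    scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

With --dif-insert-or-strip the TCP transport inserts protection information on the write path and strips it on the read path, so the host-side fio jobs below see ordinary 512-byte logical blocks while the bdev carries the 16-byte DIF metadata.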
21:33:19 -- target/dif.sh@56 -- # cat 00:28:04.453 21:33:19 -- nvmf/common.sh@543 -- # cat 00:28:04.453 21:33:19 -- common/autotest_common.sh@1331 -- # grep libasan 00:28:04.453 21:33:19 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev 00:28:04.453 21:33:19 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:28:04.453 21:33:19 -- target/dif.sh@72 -- # (( file = 1 )) 00:28:04.453 21:33:19 -- target/dif.sh@72 -- # (( file <= files )) 00:28:04.453 21:33:19 -- nvmf/common.sh@545 -- # jq . 00:28:04.453 21:33:19 -- nvmf/common.sh@546 -- # IFS=, 00:28:04.453 21:33:19 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:28:04.453 "params": { 00:28:04.453 "name": "Nvme0", 00:28:04.453 "trtype": "tcp", 00:28:04.453 "traddr": "10.0.0.2", 00:28:04.453 "adrfam": "ipv4", 00:28:04.453 "trsvcid": "4420", 00:28:04.453 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:04.453 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:04.453 "hdgst": false, 00:28:04.453 "ddgst": false 00:28:04.453 }, 00:28:04.453 "method": "bdev_nvme_attach_controller" 00:28:04.453 }' 00:28:04.453 21:33:19 -- common/autotest_common.sh@1331 -- # asan_lib=/usr/lib64/libasan.so.8 00:28:04.453 21:33:19 -- common/autotest_common.sh@1332 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:28:04.453 21:33:19 -- common/autotest_common.sh@1333 -- # break 00:28:04.453 21:33:19 -- common/autotest_common.sh@1338 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:04.453 21:33:19 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:05.046 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:28:05.046 fio-3.35 00:28:05.046 Starting 1 thread 00:28:05.046 EAL: No free 2048 kB hugepages reported on node 1 00:28:17.251 00:28:17.251 filename0: (groupid=0, jobs=1): err= 0: pid=1388441: Wed Apr 24 21:33:30 2024 00:28:17.251 read: IOPS=143, BW=573KiB/s (586kB/s)(5744KiB/10031msec) 00:28:17.251 slat (nsec): min=5934, max=47532, avg=7287.29, stdev=2777.72 00:28:17.251 clat (usec): min=554, max=42157, avg=27918.80, stdev=19124.61 00:28:17.251 lat (usec): min=561, max=42204, avg=27926.08, stdev=19124.97 00:28:17.251 clat percentiles (usec): 00:28:17.251 | 1.00th=[ 611], 5.00th=[ 766], 10.00th=[ 775], 20.00th=[ 783], 00:28:17.251 | 30.00th=[ 807], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:28:17.251 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:28:17.251 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:28:17.251 | 99.99th=[42206] 00:28:17.251 bw ( KiB/s): min= 352, max= 768, per=99.89%, avg=572.80, stdev=185.40, samples=20 00:28:17.251 iops : min= 88, max= 192, avg=143.20, stdev=46.35, samples=20 00:28:17.251 lat (usec) : 750=3.13%, 1000=30.01% 00:28:17.251 lat (msec) : 50=66.85% 00:28:17.251 cpu : usr=95.71%, sys=3.96%, ctx=35, majf=0, minf=1634 00:28:17.251 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:17.251 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:17.251 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:17.251 issued rwts: total=1436,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:17.251 latency : target=0, window=0, percentile=100.00%, depth=4 00:28:17.251 00:28:17.251 Run status group 0 (all jobs): 00:28:17.251 READ: bw=573KiB/s (586kB/s), 573KiB/s-573KiB/s 
(586kB/s-586kB/s), io=5744KiB (5882kB), run=10031-10031msec 00:28:17.251 ----------------------------------------------------- 00:28:17.251 Suppressions used: 00:28:17.251 count bytes template 00:28:17.251 1 8 /usr/src/fio/parse.c 00:28:17.251 1 8 libtcmalloc_minimal.so 00:28:17.251 1 904 libcrypto.so 00:28:17.251 ----------------------------------------------------- 00:28:17.251 00:28:17.251 21:33:31 -- target/dif.sh@88 -- # destroy_subsystems 0 00:28:17.251 21:33:31 -- target/dif.sh@43 -- # local sub 00:28:17.251 21:33:31 -- target/dif.sh@45 -- # for sub in "$@" 00:28:17.251 21:33:31 -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:17.251 21:33:31 -- target/dif.sh@36 -- # local sub_id=0 00:28:17.251 21:33:31 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:17.251 21:33:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:17.251 21:33:31 -- common/autotest_common.sh@10 -- # set +x 00:28:17.251 21:33:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:17.251 21:33:31 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:17.251 21:33:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:17.251 21:33:31 -- common/autotest_common.sh@10 -- # set +x 00:28:17.251 21:33:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:17.251 00:28:17.251 real 0m11.853s 00:28:17.251 user 0m24.146s 00:28:17.251 sys 0m0.857s 00:28:17.251 21:33:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:28:17.251 21:33:31 -- common/autotest_common.sh@10 -- # set +x 00:28:17.251 ************************************ 00:28:17.251 END TEST fio_dif_1_default 00:28:17.251 ************************************ 00:28:17.251 21:33:31 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:28:17.251 21:33:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:28:17.251 21:33:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:17.251 21:33:31 -- common/autotest_common.sh@10 -- # set +x 00:28:17.251 ************************************ 00:28:17.251 START TEST fio_dif_1_multi_subsystems 00:28:17.251 ************************************ 00:28:17.251 21:33:31 -- common/autotest_common.sh@1111 -- # fio_dif_1_multi_subsystems 00:28:17.251 21:33:31 -- target/dif.sh@92 -- # local files=1 00:28:17.251 21:33:31 -- target/dif.sh@94 -- # create_subsystems 0 1 00:28:17.251 21:33:31 -- target/dif.sh@28 -- # local sub 00:28:17.251 21:33:31 -- target/dif.sh@30 -- # for sub in "$@" 00:28:17.252 21:33:31 -- target/dif.sh@31 -- # create_subsystem 0 00:28:17.252 21:33:31 -- target/dif.sh@18 -- # local sub_id=0 00:28:17.252 21:33:31 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:28:17.252 21:33:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:17.252 21:33:31 -- common/autotest_common.sh@10 -- # set +x 00:28:17.252 bdev_null0 00:28:17.252 21:33:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:17.252 21:33:31 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:17.252 21:33:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:17.252 21:33:31 -- common/autotest_common.sh@10 -- # set +x 00:28:17.252 21:33:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:17.252 21:33:31 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:17.252 21:33:31 -- common/autotest_common.sh@549 -- # xtrace_disable 
00:28:17.252 21:33:31 -- common/autotest_common.sh@10 -- # set +x 00:28:17.252 21:33:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:17.252 21:33:31 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:17.252 21:33:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:17.252 21:33:31 -- common/autotest_common.sh@10 -- # set +x 00:28:17.252 [2024-04-24 21:33:31.342338] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:17.252 21:33:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:17.252 21:33:31 -- target/dif.sh@30 -- # for sub in "$@" 00:28:17.252 21:33:31 -- target/dif.sh@31 -- # create_subsystem 1 00:28:17.252 21:33:31 -- target/dif.sh@18 -- # local sub_id=1 00:28:17.252 21:33:31 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:28:17.252 21:33:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:17.252 21:33:31 -- common/autotest_common.sh@10 -- # set +x 00:28:17.252 bdev_null1 00:28:17.252 21:33:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:17.252 21:33:31 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:28:17.252 21:33:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:17.252 21:33:31 -- common/autotest_common.sh@10 -- # set +x 00:28:17.252 21:33:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:17.252 21:33:31 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:28:17.252 21:33:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:17.252 21:33:31 -- common/autotest_common.sh@10 -- # set +x 00:28:17.252 21:33:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:17.252 21:33:31 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:17.252 21:33:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:17.252 21:33:31 -- common/autotest_common.sh@10 -- # set +x 00:28:17.252 21:33:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:17.252 21:33:31 -- target/dif.sh@95 -- # fio /dev/fd/62 00:28:17.252 21:33:31 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:17.252 21:33:31 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:17.252 21:33:31 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:28:17.252 21:33:31 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:28:17.252 21:33:31 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:17.252 21:33:31 -- common/autotest_common.sh@1325 -- # local sanitizers 00:28:17.252 21:33:31 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:28:17.252 21:33:31 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev 00:28:17.252 21:33:31 -- common/autotest_common.sh@1327 -- # shift 00:28:17.252 21:33:31 -- nvmf/common.sh@521 -- # config=() 00:28:17.252 21:33:31 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:28:17.252 21:33:31 -- nvmf/common.sh@521 -- # local subsystem config 00:28:17.252 21:33:31 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:28:17.252 21:33:31 -- 
nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:28:17.252 21:33:31 -- target/dif.sh@82 -- # gen_fio_conf 00:28:17.252 21:33:31 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:28:17.252 { 00:28:17.252 "params": { 00:28:17.252 "name": "Nvme$subsystem", 00:28:17.252 "trtype": "$TEST_TRANSPORT", 00:28:17.252 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:17.252 "adrfam": "ipv4", 00:28:17.252 "trsvcid": "$NVMF_PORT", 00:28:17.252 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:17.252 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:17.252 "hdgst": ${hdgst:-false}, 00:28:17.252 "ddgst": ${ddgst:-false} 00:28:17.252 }, 00:28:17.252 "method": "bdev_nvme_attach_controller" 00:28:17.252 } 00:28:17.252 EOF 00:28:17.252 )") 00:28:17.252 21:33:31 -- target/dif.sh@54 -- # local file 00:28:17.252 21:33:31 -- target/dif.sh@56 -- # cat 00:28:17.252 21:33:31 -- nvmf/common.sh@543 -- # cat 00:28:17.252 21:33:31 -- common/autotest_common.sh@1331 -- # grep libasan 00:28:17.252 21:33:31 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev 00:28:17.252 21:33:31 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:28:17.252 21:33:31 -- target/dif.sh@72 -- # (( file = 1 )) 00:28:17.252 21:33:31 -- target/dif.sh@72 -- # (( file <= files )) 00:28:17.252 21:33:31 -- target/dif.sh@73 -- # cat 00:28:17.252 21:33:31 -- target/dif.sh@72 -- # (( file++ )) 00:28:17.252 21:33:31 -- target/dif.sh@72 -- # (( file <= files )) 00:28:17.252 21:33:31 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:28:17.252 21:33:31 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:28:17.252 { 00:28:17.252 "params": { 00:28:17.252 "name": "Nvme$subsystem", 00:28:17.252 "trtype": "$TEST_TRANSPORT", 00:28:17.252 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:17.252 "adrfam": "ipv4", 00:28:17.252 "trsvcid": "$NVMF_PORT", 00:28:17.252 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:17.252 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:17.252 "hdgst": ${hdgst:-false}, 00:28:17.252 "ddgst": ${ddgst:-false} 00:28:17.252 }, 00:28:17.252 "method": "bdev_nvme_attach_controller" 00:28:17.252 } 00:28:17.252 EOF 00:28:17.252 )") 00:28:17.252 21:33:31 -- nvmf/common.sh@543 -- # cat 00:28:17.252 21:33:31 -- nvmf/common.sh@545 -- # jq . 
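gen_nvmf_target_json, traced above, instantiates the heredoc template once per requested subsystem ID, substituting $subsystem into the controller name, subsystem NQN and host NQN; the entries are comma-joined, validated with jq, and handed to fio through /dev/fd/62 as --spdk_json_conf. The concrete expansion for subsystems 0 and 1 is printed just below; schematically, the file fio receives is a standard SPDK JSON config of roughly the following shape (the enclosing subsystems/bdev wrapper is paraphrased from nvmf/common.sh rather than visible in this trace), with a second, otherwise identical entry for Nvme1/cnode1/host1 appended to the config array:

    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }

Each attach_controller entry yields one namespace bdev (Nvme0n1, Nvme1n1) that the generated fio job file references by filename.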
00:28:17.252 21:33:31 -- nvmf/common.sh@546 -- # IFS=, 00:28:17.252 21:33:31 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:28:17.252 "params": { 00:28:17.252 "name": "Nvme0", 00:28:17.252 "trtype": "tcp", 00:28:17.252 "traddr": "10.0.0.2", 00:28:17.252 "adrfam": "ipv4", 00:28:17.252 "trsvcid": "4420", 00:28:17.252 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:17.252 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:17.252 "hdgst": false, 00:28:17.252 "ddgst": false 00:28:17.252 }, 00:28:17.252 "method": "bdev_nvme_attach_controller" 00:28:17.253 },{ 00:28:17.253 "params": { 00:28:17.253 "name": "Nvme1", 00:28:17.253 "trtype": "tcp", 00:28:17.253 "traddr": "10.0.0.2", 00:28:17.253 "adrfam": "ipv4", 00:28:17.253 "trsvcid": "4420", 00:28:17.253 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:17.253 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:17.253 "hdgst": false, 00:28:17.253 "ddgst": false 00:28:17.253 }, 00:28:17.253 "method": "bdev_nvme_attach_controller" 00:28:17.253 }' 00:28:17.253 21:33:31 -- common/autotest_common.sh@1331 -- # asan_lib=/usr/lib64/libasan.so.8 00:28:17.253 21:33:31 -- common/autotest_common.sh@1332 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:28:17.253 21:33:31 -- common/autotest_common.sh@1333 -- # break 00:28:17.253 21:33:31 -- common/autotest_common.sh@1338 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:17.253 21:33:31 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:17.253 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:28:17.253 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:28:17.253 fio-3.35 00:28:17.253 Starting 2 threads 00:28:17.253 EAL: No free 2048 kB hugepages reported on node 1 00:28:29.458 00:28:29.458 filename0: (groupid=0, jobs=1): err= 0: pid=1390970: Wed Apr 24 21:33:42 2024 00:28:29.458 read: IOPS=142, BW=572KiB/s (585kB/s)(5728KiB/10018msec) 00:28:29.458 slat (nsec): min=5943, max=47415, avg=7133.39, stdev=2529.23 00:28:29.458 clat (usec): min=550, max=42223, avg=27960.91, stdev=19064.72 00:28:29.458 lat (usec): min=557, max=42270, avg=27968.04, stdev=19064.86 00:28:29.458 clat percentiles (usec): 00:28:29.458 | 1.00th=[ 619], 5.00th=[ 766], 10.00th=[ 775], 20.00th=[ 783], 00:28:29.458 | 30.00th=[ 840], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:28:29.458 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:28:29.458 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:28:29.458 | 99.99th=[42206] 00:28:29.458 bw ( KiB/s): min= 352, max= 768, per=59.72%, avg=571.20, stdev=186.09, samples=20 00:28:29.458 iops : min= 88, max= 192, avg=142.80, stdev=46.52, samples=20 00:28:29.459 lat (usec) : 750=3.28%, 1000=29.40% 00:28:29.459 lat (msec) : 2=0.28%, 50=67.04% 00:28:29.459 cpu : usr=97.98%, sys=1.72%, ctx=13, majf=0, minf=1635 00:28:29.459 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:29.459 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:29.459 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:29.459 issued rwts: total=1432,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:29.459 latency : target=0, window=0, percentile=100.00%, depth=4 00:28:29.459 filename1: (groupid=0, jobs=1): err= 0: pid=1390971: Wed Apr 24 21:33:42 2024 00:28:29.459 read: 
IOPS=96, BW=386KiB/s (395kB/s)(3872KiB/10041msec) 00:28:29.459 slat (nsec): min=5954, max=47463, avg=7625.14, stdev=3148.07 00:28:29.459 clat (usec): min=40772, max=46849, avg=41469.49, stdev=586.53 00:28:29.459 lat (usec): min=40778, max=46896, avg=41477.11, stdev=587.17 00:28:29.459 clat percentiles (usec): 00:28:29.459 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:28:29.459 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41681], 00:28:29.459 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:28:29.459 | 99.00th=[42206], 99.50th=[42206], 99.90th=[46924], 99.95th=[46924], 00:28:29.459 | 99.99th=[46924] 00:28:29.459 bw ( KiB/s): min= 352, max= 416, per=40.27%, avg=385.60, stdev=12.61, samples=20 00:28:29.459 iops : min= 88, max= 104, avg=96.40, stdev= 3.15, samples=20 00:28:29.459 lat (msec) : 50=100.00% 00:28:29.459 cpu : usr=98.06%, sys=1.62%, ctx=14, majf=0, minf=1636 00:28:29.459 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:29.459 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:29.459 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:29.459 issued rwts: total=968,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:29.459 latency : target=0, window=0, percentile=100.00%, depth=4 00:28:29.459 00:28:29.459 Run status group 0 (all jobs): 00:28:29.459 READ: bw=956KiB/s (979kB/s), 386KiB/s-572KiB/s (395kB/s-585kB/s), io=9600KiB (9830kB), run=10018-10041msec 00:28:29.459 ----------------------------------------------------- 00:28:29.459 Suppressions used: 00:28:29.459 count bytes template 00:28:29.459 2 16 /usr/src/fio/parse.c 00:28:29.459 1 8 libtcmalloc_minimal.so 00:28:29.459 1 904 libcrypto.so 00:28:29.459 ----------------------------------------------------- 00:28:29.459 00:28:29.459 21:33:43 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:28:29.459 21:33:43 -- target/dif.sh@43 -- # local sub 00:28:29.459 21:33:43 -- target/dif.sh@45 -- # for sub in "$@" 00:28:29.459 21:33:43 -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:29.459 21:33:43 -- target/dif.sh@36 -- # local sub_id=0 00:28:29.459 21:33:43 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:29.459 21:33:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:29.459 21:33:43 -- common/autotest_common.sh@10 -- # set +x 00:28:29.459 21:33:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:29.459 21:33:43 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:29.459 21:33:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:29.459 21:33:43 -- common/autotest_common.sh@10 -- # set +x 00:28:29.459 21:33:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:29.459 21:33:43 -- target/dif.sh@45 -- # for sub in "$@" 00:28:29.459 21:33:43 -- target/dif.sh@46 -- # destroy_subsystem 1 00:28:29.459 21:33:43 -- target/dif.sh@36 -- # local sub_id=1 00:28:29.459 21:33:43 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:29.459 21:33:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:29.459 21:33:43 -- common/autotest_common.sh@10 -- # set +x 00:28:29.459 21:33:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:29.459 21:33:43 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:28:29.459 21:33:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:29.459 21:33:43 -- common/autotest_common.sh@10 -- # set +x 00:28:29.459 21:33:43 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:29.459 00:28:29.459 real 0m12.000s 00:28:29.459 user 0m36.658s 00:28:29.459 sys 0m0.763s 00:28:29.459 21:33:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:28:29.459 21:33:43 -- common/autotest_common.sh@10 -- # set +x 00:28:29.459 ************************************ 00:28:29.459 END TEST fio_dif_1_multi_subsystems 00:28:29.459 ************************************ 00:28:29.459 21:33:43 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:28:29.459 21:33:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:28:29.459 21:33:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:29.459 21:33:43 -- common/autotest_common.sh@10 -- # set +x 00:28:29.459 ************************************ 00:28:29.459 START TEST fio_dif_rand_params 00:28:29.459 ************************************ 00:28:29.459 21:33:43 -- common/autotest_common.sh@1111 -- # fio_dif_rand_params 00:28:29.459 21:33:43 -- target/dif.sh@100 -- # local NULL_DIF 00:28:29.459 21:33:43 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:28:29.459 21:33:43 -- target/dif.sh@103 -- # NULL_DIF=3 00:28:29.459 21:33:43 -- target/dif.sh@103 -- # bs=128k 00:28:29.459 21:33:43 -- target/dif.sh@103 -- # numjobs=3 00:28:29.459 21:33:43 -- target/dif.sh@103 -- # iodepth=3 00:28:29.459 21:33:43 -- target/dif.sh@103 -- # runtime=5 00:28:29.459 21:33:43 -- target/dif.sh@105 -- # create_subsystems 0 00:28:29.459 21:33:43 -- target/dif.sh@28 -- # local sub 00:28:29.459 21:33:43 -- target/dif.sh@30 -- # for sub in "$@" 00:28:29.459 21:33:43 -- target/dif.sh@31 -- # create_subsystem 0 00:28:29.459 21:33:43 -- target/dif.sh@18 -- # local sub_id=0 00:28:29.459 21:33:43 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:28:29.459 21:33:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:29.459 21:33:43 -- common/autotest_common.sh@10 -- # set +x 00:28:29.459 bdev_null0 00:28:29.459 21:33:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:29.459 21:33:43 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:29.459 21:33:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:29.459 21:33:43 -- common/autotest_common.sh@10 -- # set +x 00:28:29.459 21:33:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:29.459 21:33:43 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:29.459 21:33:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:29.459 21:33:43 -- common/autotest_common.sh@10 -- # set +x 00:28:29.459 21:33:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:29.459 21:33:43 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:29.459 21:33:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:29.459 21:33:43 -- common/autotest_common.sh@10 -- # set +x 00:28:29.459 [2024-04-24 21:33:43.468854] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:29.459 21:33:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:29.459 21:33:43 -- target/dif.sh@106 -- # fio /dev/fd/62 00:28:29.459 21:33:43 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:29.459 21:33:43 -- common/autotest_common.sh@1342 -- # fio_plugin 
/var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:29.459 21:33:43 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:28:29.459 21:33:43 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:28:29.459 21:33:43 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:29.459 21:33:43 -- common/autotest_common.sh@1325 -- # local sanitizers 00:28:29.459 21:33:43 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev 00:28:29.459 21:33:43 -- common/autotest_common.sh@1327 -- # shift 00:28:29.459 21:33:43 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:28:29.459 21:33:43 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:28:29.459 21:33:43 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:28:29.459 21:33:43 -- nvmf/common.sh@521 -- # config=() 00:28:29.459 21:33:43 -- target/dif.sh@82 -- # gen_fio_conf 00:28:29.459 21:33:43 -- nvmf/common.sh@521 -- # local subsystem config 00:28:29.459 21:33:43 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:28:29.459 21:33:43 -- target/dif.sh@54 -- # local file 00:28:29.459 21:33:43 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:28:29.459 { 00:28:29.459 "params": { 00:28:29.459 "name": "Nvme$subsystem", 00:28:29.459 "trtype": "$TEST_TRANSPORT", 00:28:29.459 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:29.459 "adrfam": "ipv4", 00:28:29.459 "trsvcid": "$NVMF_PORT", 00:28:29.459 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:29.459 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:29.459 "hdgst": ${hdgst:-false}, 00:28:29.459 "ddgst": ${ddgst:-false} 00:28:29.459 }, 00:28:29.459 "method": "bdev_nvme_attach_controller" 00:28:29.459 } 00:28:29.459 EOF 00:28:29.459 )") 00:28:29.459 21:33:43 -- target/dif.sh@56 -- # cat 00:28:29.459 21:33:43 -- nvmf/common.sh@543 -- # cat 00:28:29.459 21:33:43 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev 00:28:29.459 21:33:43 -- common/autotest_common.sh@1331 -- # grep libasan 00:28:29.459 21:33:43 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:28:29.459 21:33:43 -- target/dif.sh@72 -- # (( file = 1 )) 00:28:29.459 21:33:43 -- target/dif.sh@72 -- # (( file <= files )) 00:28:29.459 21:33:43 -- nvmf/common.sh@545 -- # jq . 
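For fio_dif_rand_params the harness rebuilds bdev_null0 with --dif-type 3 and runs the job parameters fixed at dif.sh@103 above: 128k blocks, 3 jobs, queue depth 3, 5-second runtime. gen_fio_conf streams the job file to fio over the second /dev/fd descriptor; a standalone job file with the same effect would look roughly like this (the section name and JSON path are illustrative, and thread=1 is required by the spdk_bdev ioengine):

    [global]
    ioengine=spdk_bdev
    spdk_json_conf=./nvme_bdev.json
    thread=1
    group_reporting=1

    [randread-dif3]
    filename=Nvme0n1
    rw=randread
    bs=128k
    iodepth=3
    numjobs=3
    time_based=1
    runtime=5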
00:28:29.459 21:33:43 -- nvmf/common.sh@546 -- # IFS=, 00:28:29.459 21:33:43 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:28:29.459 "params": { 00:28:29.459 "name": "Nvme0", 00:28:29.459 "trtype": "tcp", 00:28:29.459 "traddr": "10.0.0.2", 00:28:29.459 "adrfam": "ipv4", 00:28:29.459 "trsvcid": "4420", 00:28:29.459 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:29.459 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:29.459 "hdgst": false, 00:28:29.459 "ddgst": false 00:28:29.459 }, 00:28:29.459 "method": "bdev_nvme_attach_controller" 00:28:29.459 }' 00:28:29.459 21:33:43 -- common/autotest_common.sh@1331 -- # asan_lib=/usr/lib64/libasan.so.8 00:28:29.459 21:33:43 -- common/autotest_common.sh@1332 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:28:29.459 21:33:43 -- common/autotest_common.sh@1333 -- # break 00:28:29.460 21:33:43 -- common/autotest_common.sh@1338 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:29.460 21:33:43 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:29.460 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:28:29.460 ... 00:28:29.460 fio-3.35 00:28:29.460 Starting 3 threads 00:28:29.460 EAL: No free 2048 kB hugepages reported on node 1 00:28:34.724 00:28:34.724 filename0: (groupid=0, jobs=1): err= 0: pid=1393493: Wed Apr 24 21:33:49 2024 00:28:34.724 read: IOPS=297, BW=37.1MiB/s (38.9MB/s)(187MiB/5045msec) 00:28:34.724 slat (nsec): min=5968, max=31215, avg=7271.10, stdev=1801.48 00:28:34.724 clat (usec): min=3396, max=51698, avg=10057.26, stdev=11770.26 00:28:34.724 lat (usec): min=3403, max=51705, avg=10064.53, stdev=11770.36 00:28:34.724 clat percentiles (usec): 00:28:34.724 | 1.00th=[ 3720], 5.00th=[ 3916], 10.00th=[ 4113], 20.00th=[ 4555], 00:28:34.724 | 30.00th=[ 5145], 40.00th=[ 5932], 50.00th=[ 6325], 60.00th=[ 7046], 00:28:34.724 | 70.00th=[ 8094], 80.00th=[ 9110], 90.00th=[11338], 95.00th=[46924], 00:28:34.724 | 99.00th=[49546], 99.50th=[50070], 99.90th=[50594], 99.95th=[51643], 00:28:34.724 | 99.99th=[51643] 00:28:34.724 bw ( KiB/s): min=23552, max=61440, per=39.60%, avg=38304.70, stdev=10484.06, samples=10 00:28:34.724 iops : min= 184, max= 480, avg=299.20, stdev=81.92, samples=10 00:28:34.724 lat (msec) : 4=7.47%, 10=77.85%, 20=5.94%, 50=8.21%, 100=0.53% 00:28:34.724 cpu : usr=96.03%, sys=3.65%, ctx=8, majf=0, minf=1632 00:28:34.724 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:34.724 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:34.724 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:34.724 issued rwts: total=1499,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:34.724 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:34.724 filename0: (groupid=0, jobs=1): err= 0: pid=1393494: Wed Apr 24 21:33:49 2024 00:28:34.724 read: IOPS=236, BW=29.6MiB/s (31.1MB/s)(149MiB/5013msec) 00:28:34.724 slat (nsec): min=5373, max=33710, avg=7272.49, stdev=1703.05 00:28:34.724 clat (usec): min=3429, max=52053, avg=12649.86, stdev=14387.11 00:28:34.724 lat (usec): min=3436, max=52061, avg=12657.14, stdev=14387.27 00:28:34.724 clat percentiles (usec): 00:28:34.724 | 1.00th=[ 3916], 5.00th=[ 4146], 10.00th=[ 4490], 20.00th=[ 5342], 00:28:34.724 | 30.00th=[ 6063], 40.00th=[ 6456], 50.00th=[ 7111], 60.00th=[ 7832], 00:28:34.724 | 70.00th=[ 8717], 80.00th=[ 9765], 90.00th=[46924], 
95.00th=[48497], 00:28:34.724 | 99.00th=[50070], 99.50th=[50594], 99.90th=[51643], 99.95th=[52167], 00:28:34.724 | 99.99th=[52167] 00:28:34.724 bw ( KiB/s): min=17920, max=41472, per=31.37%, avg=30336.00, stdev=7427.68, samples=10 00:28:34.724 iops : min= 140, max= 324, avg=237.00, stdev=58.03, samples=10 00:28:34.724 lat (msec) : 4=2.02%, 10=78.62%, 20=5.22%, 50=12.88%, 100=1.26% 00:28:34.724 cpu : usr=96.99%, sys=2.71%, ctx=8, majf=0, minf=1637 00:28:34.724 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:34.724 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:34.724 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:34.724 issued rwts: total=1188,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:34.724 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:34.724 filename0: (groupid=0, jobs=1): err= 0: pid=1393495: Wed Apr 24 21:33:49 2024 00:28:34.724 read: IOPS=224, BW=28.1MiB/s (29.5MB/s)(141MiB/5004msec) 00:28:34.724 slat (nsec): min=5974, max=25118, avg=7635.96, stdev=2078.95 00:28:34.724 clat (usec): min=3227, max=53717, avg=13331.75, stdev=15088.96 00:28:34.724 lat (usec): min=3233, max=53742, avg=13339.39, stdev=15089.21 00:28:34.724 clat percentiles (usec): 00:28:34.724 | 1.00th=[ 3720], 5.00th=[ 4015], 10.00th=[ 4293], 20.00th=[ 4817], 00:28:34.724 | 30.00th=[ 6128], 40.00th=[ 6587], 50.00th=[ 7308], 60.00th=[ 8094], 00:28:34.724 | 70.00th=[ 9241], 80.00th=[10945], 90.00th=[47449], 95.00th=[49021], 00:28:34.724 | 99.00th=[52691], 99.50th=[52691], 99.90th=[53740], 99.95th=[53740], 00:28:34.724 | 99.99th=[53740] 00:28:34.724 bw ( KiB/s): min=17664, max=41472, per=29.70%, avg=28723.20, stdev=7911.46, samples=10 00:28:34.724 iops : min= 138, max= 324, avg=224.40, stdev=61.81, samples=10 00:28:34.724 lat (msec) : 4=4.62%, 10=69.87%, 20=10.31%, 50=11.29%, 100=3.91% 00:28:34.724 cpu : usr=97.10%, sys=2.58%, ctx=7, majf=0, minf=1634 00:28:34.724 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:34.724 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:34.724 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:34.724 issued rwts: total=1125,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:34.724 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:34.724 00:28:34.724 Run status group 0 (all jobs): 00:28:34.724 READ: bw=94.4MiB/s (99.0MB/s), 28.1MiB/s-37.1MiB/s (29.5MB/s-38.9MB/s), io=477MiB (500MB), run=5004-5045msec 00:28:35.292 ----------------------------------------------------- 00:28:35.292 Suppressions used: 00:28:35.292 count bytes template 00:28:35.292 5 44 /usr/src/fio/parse.c 00:28:35.292 1 8 libtcmalloc_minimal.so 00:28:35.292 1 904 libcrypto.so 00:28:35.292 ----------------------------------------------------- 00:28:35.292 00:28:35.292 21:33:50 -- target/dif.sh@107 -- # destroy_subsystems 0 00:28:35.292 21:33:50 -- target/dif.sh@43 -- # local sub 00:28:35.292 21:33:50 -- target/dif.sh@45 -- # for sub in "$@" 00:28:35.292 21:33:50 -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:35.292 21:33:50 -- target/dif.sh@36 -- # local sub_id=0 00:28:35.292 21:33:50 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:35.292 21:33:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:35.292 21:33:50 -- common/autotest_common.sh@10 -- # set +x 00:28:35.292 21:33:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:35.292 21:33:50 -- target/dif.sh@39 -- # rpc_cmd 
bdev_null_delete bdev_null0 00:28:35.292 21:33:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:35.292 21:33:50 -- common/autotest_common.sh@10 -- # set +x 00:28:35.292 21:33:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:35.292 21:33:50 -- target/dif.sh@109 -- # NULL_DIF=2 00:28:35.292 21:33:50 -- target/dif.sh@109 -- # bs=4k 00:28:35.292 21:33:50 -- target/dif.sh@109 -- # numjobs=8 00:28:35.292 21:33:50 -- target/dif.sh@109 -- # iodepth=16 00:28:35.292 21:33:50 -- target/dif.sh@109 -- # runtime= 00:28:35.292 21:33:50 -- target/dif.sh@109 -- # files=2 00:28:35.292 21:33:50 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:28:35.292 21:33:50 -- target/dif.sh@28 -- # local sub 00:28:35.292 21:33:50 -- target/dif.sh@30 -- # for sub in "$@" 00:28:35.292 21:33:50 -- target/dif.sh@31 -- # create_subsystem 0 00:28:35.292 21:33:50 -- target/dif.sh@18 -- # local sub_id=0 00:28:35.292 21:33:50 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:28:35.292 21:33:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:35.292 21:33:50 -- common/autotest_common.sh@10 -- # set +x 00:28:35.292 bdev_null0 00:28:35.292 21:33:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:35.292 21:33:50 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:35.292 21:33:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:35.292 21:33:50 -- common/autotest_common.sh@10 -- # set +x 00:28:35.292 21:33:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:35.292 21:33:50 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:35.292 21:33:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:35.292 21:33:50 -- common/autotest_common.sh@10 -- # set +x 00:28:35.292 21:33:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:35.292 21:33:50 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:35.292 21:33:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:35.292 21:33:50 -- common/autotest_common.sh@10 -- # set +x 00:28:35.292 [2024-04-24 21:33:50.187005] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:35.292 21:33:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:35.292 21:33:50 -- target/dif.sh@30 -- # for sub in "$@" 00:28:35.292 21:33:50 -- target/dif.sh@31 -- # create_subsystem 1 00:28:35.292 21:33:50 -- target/dif.sh@18 -- # local sub_id=1 00:28:35.292 21:33:50 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:28:35.292 21:33:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:35.292 21:33:50 -- common/autotest_common.sh@10 -- # set +x 00:28:35.292 bdev_null1 00:28:35.292 21:33:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:35.292 21:33:50 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:28:35.292 21:33:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:35.292 21:33:50 -- common/autotest_common.sh@10 -- # set +x 00:28:35.292 21:33:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:35.292 21:33:50 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:28:35.292 21:33:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:35.292 
21:33:50 -- common/autotest_common.sh@10 -- # set +x 00:28:35.292 21:33:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:35.292 21:33:50 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:35.292 21:33:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:35.292 21:33:50 -- common/autotest_common.sh@10 -- # set +x 00:28:35.292 21:33:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:35.292 21:33:50 -- target/dif.sh@30 -- # for sub in "$@" 00:28:35.292 21:33:50 -- target/dif.sh@31 -- # create_subsystem 2 00:28:35.292 21:33:50 -- target/dif.sh@18 -- # local sub_id=2 00:28:35.292 21:33:50 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:28:35.292 21:33:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:35.292 21:33:50 -- common/autotest_common.sh@10 -- # set +x 00:28:35.292 bdev_null2 00:28:35.292 21:33:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:35.292 21:33:50 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:28:35.292 21:33:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:35.292 21:33:50 -- common/autotest_common.sh@10 -- # set +x 00:28:35.292 21:33:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:35.292 21:33:50 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:28:35.292 21:33:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:35.292 21:33:50 -- common/autotest_common.sh@10 -- # set +x 00:28:35.293 21:33:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:35.293 21:33:50 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:28:35.293 21:33:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:35.293 21:33:50 -- common/autotest_common.sh@10 -- # set +x 00:28:35.551 21:33:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:35.551 21:33:50 -- target/dif.sh@112 -- # fio /dev/fd/62 00:28:35.551 21:33:50 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:28:35.551 21:33:50 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:28:35.551 21:33:50 -- nvmf/common.sh@521 -- # config=() 00:28:35.551 21:33:50 -- nvmf/common.sh@521 -- # local subsystem config 00:28:35.551 21:33:50 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:28:35.551 21:33:50 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:28:35.551 { 00:28:35.551 "params": { 00:28:35.551 "name": "Nvme$subsystem", 00:28:35.551 "trtype": "$TEST_TRANSPORT", 00:28:35.551 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:35.551 "adrfam": "ipv4", 00:28:35.551 "trsvcid": "$NVMF_PORT", 00:28:35.551 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:35.551 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:35.551 "hdgst": ${hdgst:-false}, 00:28:35.551 "ddgst": ${ddgst:-false} 00:28:35.551 }, 00:28:35.551 "method": "bdev_nvme_attach_controller" 00:28:35.551 } 00:28:35.551 EOF 00:28:35.551 )") 00:28:35.551 21:33:50 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:35.551 21:33:50 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:35.552 21:33:50 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:28:35.552 21:33:50 -- 
common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:35.552 21:33:50 -- common/autotest_common.sh@1325 -- # local sanitizers 00:28:35.552 21:33:50 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev 00:28:35.552 21:33:50 -- common/autotest_common.sh@1327 -- # shift 00:28:35.552 21:33:50 -- target/dif.sh@82 -- # gen_fio_conf 00:28:35.552 21:33:50 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:28:35.552 21:33:50 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:28:35.552 21:33:50 -- target/dif.sh@54 -- # local file 00:28:35.552 21:33:50 -- target/dif.sh@56 -- # cat 00:28:35.552 21:33:50 -- nvmf/common.sh@543 -- # cat 00:28:35.552 21:33:50 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev 00:28:35.552 21:33:50 -- common/autotest_common.sh@1331 -- # grep libasan 00:28:35.552 21:33:50 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:28:35.552 21:33:50 -- target/dif.sh@72 -- # (( file = 1 )) 00:28:35.552 21:33:50 -- target/dif.sh@72 -- # (( file <= files )) 00:28:35.552 21:33:50 -- target/dif.sh@73 -- # cat 00:28:35.552 21:33:50 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:28:35.552 21:33:50 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:28:35.552 { 00:28:35.552 "params": { 00:28:35.552 "name": "Nvme$subsystem", 00:28:35.552 "trtype": "$TEST_TRANSPORT", 00:28:35.552 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:35.552 "adrfam": "ipv4", 00:28:35.552 "trsvcid": "$NVMF_PORT", 00:28:35.552 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:35.552 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:35.552 "hdgst": ${hdgst:-false}, 00:28:35.552 "ddgst": ${ddgst:-false} 00:28:35.552 }, 00:28:35.552 "method": "bdev_nvme_attach_controller" 00:28:35.552 } 00:28:35.552 EOF 00:28:35.552 )") 00:28:35.552 21:33:50 -- target/dif.sh@72 -- # (( file++ )) 00:28:35.552 21:33:50 -- target/dif.sh@72 -- # (( file <= files )) 00:28:35.552 21:33:50 -- target/dif.sh@73 -- # cat 00:28:35.552 21:33:50 -- nvmf/common.sh@543 -- # cat 00:28:35.552 21:33:50 -- target/dif.sh@72 -- # (( file++ )) 00:28:35.552 21:33:50 -- target/dif.sh@72 -- # (( file <= files )) 00:28:35.552 21:33:50 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:28:35.552 21:33:50 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:28:35.552 { 00:28:35.552 "params": { 00:28:35.552 "name": "Nvme$subsystem", 00:28:35.552 "trtype": "$TEST_TRANSPORT", 00:28:35.552 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:35.552 "adrfam": "ipv4", 00:28:35.552 "trsvcid": "$NVMF_PORT", 00:28:35.552 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:35.552 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:35.552 "hdgst": ${hdgst:-false}, 00:28:35.552 "ddgst": ${ddgst:-false} 00:28:35.552 }, 00:28:35.552 "method": "bdev_nvme_attach_controller" 00:28:35.552 } 00:28:35.552 EOF 00:28:35.552 )") 00:28:35.552 21:33:50 -- nvmf/common.sh@543 -- # cat 00:28:35.552 21:33:50 -- nvmf/common.sh@545 -- # jq . 
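The ldd / grep libasan / awk probe traced above is the sanitizer shim from autotest_common.sh: it extracts the ASan runtime that the spdk_bdev fio plugin was linked against and prepends it to LD_PRELOAD together with the plugin itself (the assignment appears just below). fio is not ASan-instrumented and ASan requires its runtime to be loaded before any instrumented DSO, so the launch condenses to:

    plugin=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
    LD_PRELOAD="$asan_lib $plugin" \
        /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61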
00:28:35.552 21:33:50 -- nvmf/common.sh@546 -- # IFS=, 00:28:35.552 21:33:50 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:28:35.552 "params": { 00:28:35.552 "name": "Nvme0", 00:28:35.552 "trtype": "tcp", 00:28:35.552 "traddr": "10.0.0.2", 00:28:35.552 "adrfam": "ipv4", 00:28:35.552 "trsvcid": "4420", 00:28:35.552 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:35.552 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:35.552 "hdgst": false, 00:28:35.552 "ddgst": false 00:28:35.552 }, 00:28:35.552 "method": "bdev_nvme_attach_controller" 00:28:35.552 },{ 00:28:35.552 "params": { 00:28:35.552 "name": "Nvme1", 00:28:35.552 "trtype": "tcp", 00:28:35.552 "traddr": "10.0.0.2", 00:28:35.552 "adrfam": "ipv4", 00:28:35.552 "trsvcid": "4420", 00:28:35.552 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:35.552 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:35.552 "hdgst": false, 00:28:35.552 "ddgst": false 00:28:35.552 }, 00:28:35.552 "method": "bdev_nvme_attach_controller" 00:28:35.552 },{ 00:28:35.552 "params": { 00:28:35.552 "name": "Nvme2", 00:28:35.552 "trtype": "tcp", 00:28:35.552 "traddr": "10.0.0.2", 00:28:35.552 "adrfam": "ipv4", 00:28:35.552 "trsvcid": "4420", 00:28:35.552 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:35.552 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:35.552 "hdgst": false, 00:28:35.552 "ddgst": false 00:28:35.552 }, 00:28:35.552 "method": "bdev_nvme_attach_controller" 00:28:35.552 }' 00:28:35.552 21:33:50 -- common/autotest_common.sh@1331 -- # asan_lib=/usr/lib64/libasan.so.8 00:28:35.552 21:33:50 -- common/autotest_common.sh@1332 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:28:35.552 21:33:50 -- common/autotest_common.sh@1333 -- # break 00:28:35.552 21:33:50 -- common/autotest_common.sh@1338 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:35.552 21:33:50 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:35.810 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:28:35.810 ... 00:28:35.810 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:28:35.810 ... 00:28:35.810 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:28:35.810 ... 
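The thread count fio reports next is the product of the job file layout, not a separate knob: the three filename sections above (one per null bdev, cnode0 through cnode2) each expand to numjobs=8 workers from dif.sh@109, so 3 x 8 = 24 threads are started.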
00:28:35.810 fio-3.35 00:28:35.810 Starting 24 threads 00:28:35.810 EAL: No free 2048 kB hugepages reported on node 1 00:28:48.020 00:28:48.020 filename0: (groupid=0, jobs=1): err= 0: pid=1395128: Wed Apr 24 21:34:01 2024 00:28:48.020 read: IOPS=497, BW=1990KiB/s (2038kB/s)(19.5MiB/10017msec) 00:28:48.020 slat (nsec): min=5197, max=71783, avg=10641.13, stdev=5621.75 00:28:48.020 clat (usec): min=13654, max=45003, avg=32075.25, stdev=2667.47 00:28:48.020 lat (usec): min=13664, max=45025, avg=32085.89, stdev=2667.02 00:28:48.020 clat percentiles (usec): 00:28:48.020 | 1.00th=[20579], 5.00th=[25560], 10.00th=[32375], 20.00th=[32375], 00:28:48.020 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32637], 00:28:48.020 | 70.00th=[32637], 80.00th=[32637], 90.00th=[32900], 95.00th=[33162], 00:28:48.020 | 99.00th=[33817], 99.50th=[38011], 99.90th=[44303], 99.95th=[44827], 00:28:48.020 | 99.99th=[44827] 00:28:48.020 bw ( KiB/s): min= 1912, max= 2536, per=4.23%, avg=1986.40, stdev=141.36, samples=20 00:28:48.020 iops : min= 478, max= 634, avg=496.60, stdev=35.34, samples=20 00:28:48.020 lat (msec) : 20=0.76%, 50=99.24% 00:28:48.020 cpu : usr=98.84%, sys=0.72%, ctx=55, majf=0, minf=1636 00:28:48.020 IO depths : 1=5.7%, 2=11.5%, 4=23.7%, 8=52.2%, 16=6.8%, 32=0.0%, >=64=0.0% 00:28:48.020 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:48.020 complete : 0=0.0%, 4=93.8%, 8=0.5%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:48.020 issued rwts: total=4983,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:48.020 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:48.020 filename0: (groupid=0, jobs=1): err= 0: pid=1395129: Wed Apr 24 21:34:01 2024 00:28:48.020 read: IOPS=488, BW=1956KiB/s (2003kB/s)(19.1MiB/10014msec) 00:28:48.020 slat (usec): min=5, max=102, avg=32.40, stdev=22.72 00:28:48.020 clat (usec): min=15449, max=54270, avg=32475.39, stdev=1635.94 00:28:48.020 lat (usec): min=15457, max=54295, avg=32507.79, stdev=1634.74 00:28:48.020 clat percentiles (usec): 00:28:48.020 | 1.00th=[31589], 5.00th=[31851], 10.00th=[32113], 20.00th=[32113], 00:28:48.020 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:28:48.020 | 70.00th=[32637], 80.00th=[32637], 90.00th=[32900], 95.00th=[33162], 00:28:48.020 | 99.00th=[33817], 99.50th=[34866], 99.90th=[54264], 99.95th=[54264], 00:28:48.020 | 99.99th=[54264] 00:28:48.020 bw ( KiB/s): min= 1795, max= 2048, per=4.15%, avg=1947.11, stdev=68.14, samples=19 00:28:48.020 iops : min= 448, max= 512, avg=486.74, stdev=17.13, samples=19 00:28:48.020 lat (msec) : 20=0.33%, 50=99.35%, 100=0.33% 00:28:48.020 cpu : usr=99.01%, sys=0.64%, ctx=14, majf=0, minf=1633 00:28:48.020 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:28:48.020 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:48.020 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:48.020 issued rwts: total=4896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:48.020 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:48.020 filename0: (groupid=0, jobs=1): err= 0: pid=1395130: Wed Apr 24 21:34:01 2024 00:28:48.020 read: IOPS=487, BW=1950KiB/s (1996kB/s)(19.1MiB/10012msec) 00:28:48.020 slat (nsec): min=6779, max=53115, avg=15443.46, stdev=8174.17 00:28:48.020 clat (usec): min=19530, max=59461, avg=32683.51, stdev=1905.09 00:28:48.020 lat (usec): min=19539, max=59492, avg=32698.95, stdev=1905.00 00:28:48.020 clat percentiles (usec): 00:28:48.020 | 1.00th=[32113], 
5.00th=[32375], 10.00th=[32375], 20.00th=[32375], 00:28:48.020 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32637], 00:28:48.020 | 70.00th=[32637], 80.00th=[32637], 90.00th=[32900], 95.00th=[33162], 00:28:48.020 | 99.00th=[33817], 99.50th=[44303], 99.90th=[59507], 99.95th=[59507], 00:28:48.020 | 99.99th=[59507] 00:28:48.020 bw ( KiB/s): min= 1792, max= 2048, per=4.14%, avg=1944.60, stdev=67.51, samples=20 00:28:48.020 iops : min= 448, max= 512, avg=486.15, stdev=16.88, samples=20 00:28:48.020 lat (msec) : 20=0.04%, 50=99.63%, 100=0.33% 00:28:48.020 cpu : usr=98.32%, sys=0.99%, ctx=61, majf=0, minf=1633 00:28:48.020 IO depths : 1=6.0%, 2=12.2%, 4=24.9%, 8=50.4%, 16=6.5%, 32=0.0%, >=64=0.0% 00:28:48.020 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:48.020 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:48.020 issued rwts: total=4880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:48.020 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:48.020 filename0: (groupid=0, jobs=1): err= 0: pid=1395131: Wed Apr 24 21:34:01 2024 00:28:48.020 read: IOPS=487, BW=1948KiB/s (1995kB/s)(19.1MiB/10019msec) 00:28:48.020 slat (nsec): min=4274, max=83633, avg=14720.22, stdev=11853.57 00:28:48.020 clat (usec): min=25117, max=68478, avg=32738.24, stdev=2081.66 00:28:48.020 lat (usec): min=25126, max=68501, avg=32752.96, stdev=2080.85 00:28:48.020 clat percentiles (usec): 00:28:48.020 | 1.00th=[32113], 5.00th=[32375], 10.00th=[32375], 20.00th=[32375], 00:28:48.020 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32637], 60.00th=[32637], 00:28:48.020 | 70.00th=[32637], 80.00th=[32637], 90.00th=[32900], 95.00th=[33424], 00:28:48.020 | 99.00th=[33817], 99.50th=[34866], 99.90th=[68682], 99.95th=[68682], 00:28:48.020 | 99.99th=[68682] 00:28:48.020 bw ( KiB/s): min= 1792, max= 2048, per=4.15%, avg=1945.60, stdev=66.96, samples=20 00:28:48.020 iops : min= 448, max= 512, avg=486.40, stdev=16.74, samples=20 00:28:48.020 lat (msec) : 50=99.67%, 100=0.33% 00:28:48.020 cpu : usr=98.85%, sys=0.70%, ctx=63, majf=0, minf=1634 00:28:48.020 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:28:48.020 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:48.020 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:48.020 issued rwts: total=4880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:48.020 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:48.020 filename0: (groupid=0, jobs=1): err= 0: pid=1395132: Wed Apr 24 21:34:01 2024 00:28:48.020 read: IOPS=487, BW=1951KiB/s (1998kB/s)(19.1MiB/10003msec) 00:28:48.020 slat (usec): min=5, max=117, avg=31.76, stdev=13.55 00:28:48.020 clat (usec): min=24837, max=53049, avg=32504.69, stdev=1257.87 00:28:48.020 lat (usec): min=24847, max=53078, avg=32536.45, stdev=1257.06 00:28:48.020 clat percentiles (usec): 00:28:48.020 | 1.00th=[31851], 5.00th=[32113], 10.00th=[32113], 20.00th=[32113], 00:28:48.020 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32375], 00:28:48.020 | 70.00th=[32637], 80.00th=[32637], 90.00th=[32900], 95.00th=[33162], 00:28:48.020 | 99.00th=[33817], 99.50th=[34341], 99.90th=[53216], 99.95th=[53216], 00:28:48.020 | 99.99th=[53216] 00:28:48.020 bw ( KiB/s): min= 1795, max= 2048, per=4.15%, avg=1947.11, stdev=68.14, samples=19 00:28:48.020 iops : min= 448, max= 512, avg=486.74, stdev=17.13, samples=19 00:28:48.020 lat (msec) : 50=99.67%, 100=0.33% 00:28:48.020 cpu : usr=99.08%, sys=0.54%, 
ctx=15, majf=0, minf=1636 00:28:48.020 IO depths : 1=5.9%, 2=12.1%, 4=25.0%, 8=50.4%, 16=6.6%, 32=0.0%, >=64=0.0% 00:28:48.020 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:48.020 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:48.020 issued rwts: total=4880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:48.020 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:48.020 filename0: (groupid=0, jobs=1): err= 0: pid=1395133: Wed Apr 24 21:34:01 2024 00:28:48.020 read: IOPS=489, BW=1957KiB/s (2004kB/s)(19.1MiB/10009msec) 00:28:48.021 slat (nsec): min=5699, max=87693, avg=32689.94, stdev=16756.37 00:28:48.021 clat (usec): min=14975, max=49266, avg=32427.19, stdev=1433.95 00:28:48.021 lat (usec): min=15021, max=49293, avg=32459.88, stdev=1433.07 00:28:48.021 clat percentiles (usec): 00:28:48.021 | 1.00th=[31589], 5.00th=[32113], 10.00th=[32113], 20.00th=[32113], 00:28:48.021 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32375], 00:28:48.021 | 70.00th=[32637], 80.00th=[32637], 90.00th=[32900], 95.00th=[33162], 00:28:48.021 | 99.00th=[33817], 99.50th=[34866], 99.90th=[49021], 99.95th=[49021], 00:28:48.021 | 99.99th=[49021] 00:28:48.021 bw ( KiB/s): min= 1795, max= 2048, per=4.15%, avg=1947.11, stdev=68.14, samples=19 00:28:48.021 iops : min= 448, max= 512, avg=486.74, stdev=17.13, samples=19 00:28:48.021 lat (msec) : 20=0.33%, 50=99.67% 00:28:48.021 cpu : usr=99.00%, sys=0.66%, ctx=19, majf=0, minf=1636 00:28:48.021 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:28:48.021 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:48.021 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:48.021 issued rwts: total=4896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:48.021 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:48.021 filename0: (groupid=0, jobs=1): err= 0: pid=1395134: Wed Apr 24 21:34:01 2024 00:28:48.021 read: IOPS=494, BW=1978KiB/s (2026kB/s)(19.3MiB/10008msec) 00:28:48.021 slat (usec): min=5, max=107, avg=19.94, stdev=14.29 00:28:48.021 clat (usec): min=12470, max=58553, avg=32265.68, stdev=3018.59 00:28:48.021 lat (usec): min=12478, max=58578, avg=32285.62, stdev=3019.16 00:28:48.021 clat percentiles (usec): 00:28:48.021 | 1.00th=[19792], 5.00th=[26870], 10.00th=[32375], 20.00th=[32375], 00:28:48.021 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32637], 60.00th=[32637], 00:28:48.021 | 70.00th=[32637], 80.00th=[32637], 90.00th=[32900], 95.00th=[33424], 00:28:48.021 | 99.00th=[38536], 99.50th=[49546], 99.90th=[58459], 99.95th=[58459], 00:28:48.021 | 99.99th=[58459] 00:28:48.021 bw ( KiB/s): min= 1795, max= 2144, per=4.20%, avg=1972.37, stdev=68.87, samples=19 00:28:48.021 iops : min= 448, max= 536, avg=493.05, stdev=17.33, samples=19 00:28:48.021 lat (msec) : 20=1.09%, 50=98.59%, 100=0.32% 00:28:48.021 cpu : usr=98.72%, sys=0.82%, ctx=55, majf=0, minf=1635 00:28:48.021 IO depths : 1=0.7%, 2=1.4%, 4=3.5%, 8=77.5%, 16=16.9%, 32=0.0%, >=64=0.0% 00:28:48.021 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:48.021 complete : 0=0.0%, 4=89.9%, 8=9.1%, 16=1.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:48.021 issued rwts: total=4950,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:48.021 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:48.021 filename0: (groupid=0, jobs=1): err= 0: pid=1395135: Wed Apr 24 21:34:01 2024 00:28:48.021 read: IOPS=487, BW=1951KiB/s (1998kB/s)(19.1MiB/10004msec) 
00:28:48.021 slat (nsec): min=5229, max=81363, avg=30796.01, stdev=13729.52 00:28:48.021 clat (usec): min=24400, max=53028, avg=32509.00, stdev=1299.69 00:28:48.021 lat (usec): min=24408, max=53054, avg=32539.80, stdev=1298.97 00:28:48.021 clat percentiles (usec): 00:28:48.021 | 1.00th=[31851], 5.00th=[32113], 10.00th=[32113], 20.00th=[32113], 00:28:48.021 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32375], 00:28:48.021 | 70.00th=[32637], 80.00th=[32637], 90.00th=[32900], 95.00th=[33162], 00:28:48.021 | 99.00th=[34341], 99.50th=[36439], 99.90th=[53216], 99.95th=[53216], 00:28:48.021 | 99.99th=[53216] 00:28:48.021 bw ( KiB/s): min= 1795, max= 2048, per=4.15%, avg=1947.11, stdev=66.67, samples=19 00:28:48.021 iops : min= 448, max= 512, avg=486.74, stdev=16.76, samples=19 00:28:48.021 lat (msec) : 50=99.67%, 100=0.33% 00:28:48.021 cpu : usr=98.71%, sys=0.82%, ctx=108, majf=0, minf=1635 00:28:48.021 IO depths : 1=5.1%, 2=11.3%, 4=25.0%, 8=51.2%, 16=7.4%, 32=0.0%, >=64=0.0% 00:28:48.021 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:48.021 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:48.021 issued rwts: total=4880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:48.021 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:48.021 filename1: (groupid=0, jobs=1): err= 0: pid=1395136: Wed Apr 24 21:34:01 2024 00:28:48.021 read: IOPS=488, BW=1953KiB/s (2000kB/s)(19.1MiB/10026msec) 00:28:48.021 slat (nsec): min=4705, max=85808, avg=16323.05, stdev=12132.68 00:28:48.021 clat (usec): min=19271, max=61814, avg=32628.11, stdev=2172.10 00:28:48.021 lat (usec): min=19280, max=61833, avg=32644.44, stdev=2171.58 00:28:48.021 clat percentiles (usec): 00:28:48.021 | 1.00th=[26870], 5.00th=[32113], 10.00th=[32375], 20.00th=[32375], 00:28:48.021 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32637], 00:28:48.021 | 70.00th=[32637], 80.00th=[32637], 90.00th=[32900], 95.00th=[33424], 00:28:48.021 | 99.00th=[34341], 99.50th=[46924], 99.90th=[61604], 99.95th=[61604], 00:28:48.021 | 99.99th=[61604] 00:28:48.021 bw ( KiB/s): min= 1792, max= 2048, per=4.16%, avg=1952.00, stdev=70.42, samples=20 00:28:48.021 iops : min= 448, max= 512, avg=488.00, stdev=17.60, samples=20 00:28:48.021 lat (msec) : 20=0.25%, 50=99.43%, 100=0.33% 00:28:48.021 cpu : usr=98.84%, sys=0.71%, ctx=79, majf=0, minf=1639 00:28:48.021 IO depths : 1=6.0%, 2=12.2%, 4=24.7%, 8=50.6%, 16=6.5%, 32=0.0%, >=64=0.0% 00:28:48.021 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:48.021 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:48.021 issued rwts: total=4896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:48.021 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:48.021 filename1: (groupid=0, jobs=1): err= 0: pid=1395137: Wed Apr 24 21:34:01 2024 00:28:48.021 read: IOPS=493, BW=1975KiB/s (2023kB/s)(19.3MiB/10008msec) 00:28:48.021 slat (nsec): min=5387, max=79610, avg=19842.22, stdev=15003.68 00:28:48.021 clat (usec): min=13906, max=51225, avg=32302.96, stdev=2646.59 00:28:48.021 lat (usec): min=13915, max=51244, avg=32322.80, stdev=2647.58 00:28:48.021 clat percentiles (usec): 00:28:48.021 | 1.00th=[20579], 5.00th=[29230], 10.00th=[32113], 20.00th=[32375], 00:28:48.021 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32637], 60.00th=[32637], 00:28:48.021 | 70.00th=[32637], 80.00th=[32637], 90.00th=[32900], 95.00th=[33162], 00:28:48.021 | 99.00th=[39060], 99.50th=[47973], 99.90th=[51119], 
99.95th=[51119], 00:28:48.021 | 99.99th=[51119] 00:28:48.021 bw ( KiB/s): min= 1792, max= 2160, per=4.19%, avg=1966.32, stdev=70.33, samples=19 00:28:48.021 iops : min= 448, max= 540, avg=491.58, stdev=17.58, samples=19 00:28:48.021 lat (msec) : 20=0.77%, 50=99.03%, 100=0.20% 00:28:48.021 cpu : usr=99.07%, sys=0.56%, ctx=14, majf=0, minf=1636 00:28:48.021 IO depths : 1=0.8%, 2=1.8%, 4=4.3%, 8=76.4%, 16=16.6%, 32=0.0%, >=64=0.0% 00:28:48.021 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:48.021 complete : 0=0.0%, 4=90.1%, 8=8.8%, 16=1.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:48.021 issued rwts: total=4942,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:48.021 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:48.021 filename1: (groupid=0, jobs=1): err= 0: pid=1395138: Wed Apr 24 21:34:01 2024 00:28:48.021 read: IOPS=488, BW=1956KiB/s (2003kB/s)(19.1MiB/10014msec) 00:28:48.021 slat (nsec): min=5208, max=96758, avg=35037.41, stdev=22183.20 00:28:48.021 clat (usec): min=14904, max=54194, avg=32427.95, stdev=1703.35 00:28:48.021 lat (usec): min=14918, max=54219, avg=32462.99, stdev=1701.91 00:28:48.021 clat percentiles (usec): 00:28:48.021 | 1.00th=[31327], 5.00th=[31851], 10.00th=[32113], 20.00th=[32113], 00:28:48.021 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32375], 60.00th=[32375], 00:28:48.021 | 70.00th=[32637], 80.00th=[32637], 90.00th=[32900], 95.00th=[33162], 00:28:48.021 | 99.00th=[33817], 99.50th=[34866], 99.90th=[54264], 99.95th=[54264], 00:28:48.021 | 99.99th=[54264] 00:28:48.021 bw ( KiB/s): min= 1795, max= 2048, per=4.15%, avg=1947.11, stdev=68.14, samples=19 00:28:48.021 iops : min= 448, max= 512, avg=486.74, stdev=17.13, samples=19 00:28:48.021 lat (msec) : 20=0.37%, 50=99.31%, 100=0.33% 00:28:48.021 cpu : usr=99.06%, sys=0.57%, ctx=13, majf=0, minf=1636 00:28:48.021 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:28:48.021 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:48.021 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:48.021 issued rwts: total=4896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:48.021 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:48.021 filename1: (groupid=0, jobs=1): err= 0: pid=1395139: Wed Apr 24 21:34:01 2024 00:28:48.021 read: IOPS=487, BW=1948KiB/s (1995kB/s)(19.1MiB/10019msec) 00:28:48.021 slat (nsec): min=3992, max=83535, avg=29417.70, stdev=15152.38 00:28:48.021 clat (usec): min=24923, max=68653, avg=32618.07, stdev=2136.04 00:28:48.022 lat (usec): min=24932, max=68674, avg=32647.48, stdev=2134.39 00:28:48.022 clat percentiles (usec): 00:28:48.022 | 1.00th=[31851], 5.00th=[32113], 10.00th=[32113], 20.00th=[32375], 00:28:48.022 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:28:48.022 | 70.00th=[32637], 80.00th=[32637], 90.00th=[32900], 95.00th=[33162], 00:28:48.022 | 99.00th=[33817], 99.50th=[34866], 99.90th=[68682], 99.95th=[68682], 00:28:48.022 | 99.99th=[68682] 00:28:48.022 bw ( KiB/s): min= 1792, max= 2048, per=4.15%, avg=1945.60, stdev=66.96, samples=20 00:28:48.022 iops : min= 448, max= 512, avg=486.40, stdev=16.74, samples=20 00:28:48.022 lat (msec) : 50=99.67%, 100=0.33% 00:28:48.022 cpu : usr=98.98%, sys=0.61%, ctx=50, majf=0, minf=1633 00:28:48.022 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:28:48.022 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:48.022 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:28:48.022 issued rwts: total=4880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:48.022 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:48.022 filename1: (groupid=0, jobs=1): err= 0: pid=1395140: Wed Apr 24 21:34:01 2024 00:28:48.022 read: IOPS=491, BW=1968KiB/s (2015kB/s)(19.2MiB/10010msec) 00:28:48.022 slat (nsec): min=5659, max=80051, avg=30368.06, stdev=14628.30 00:28:48.022 clat (usec): min=13730, max=50245, avg=32237.55, stdev=2726.04 00:28:48.022 lat (usec): min=13739, max=50273, avg=32267.92, stdev=2728.25 00:28:48.022 clat percentiles (usec): 00:28:48.022 | 1.00th=[21103], 5.00th=[31065], 10.00th=[32113], 20.00th=[32113], 00:28:48.022 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32375], 00:28:48.022 | 70.00th=[32375], 80.00th=[32637], 90.00th=[32900], 95.00th=[33162], 00:28:48.022 | 99.00th=[44303], 99.50th=[50070], 99.90th=[50070], 99.95th=[50070], 00:28:48.022 | 99.99th=[50070] 00:28:48.022 bw ( KiB/s): min= 1792, max= 2224, per=4.19%, avg=1965.47, stdev=96.63, samples=19 00:28:48.022 iops : min= 448, max= 556, avg=491.37, stdev=24.16, samples=19 00:28:48.022 lat (msec) : 20=0.41%, 50=99.11%, 100=0.49% 00:28:48.022 cpu : usr=99.05%, sys=0.58%, ctx=15, majf=0, minf=1634 00:28:48.022 IO depths : 1=5.0%, 2=10.9%, 4=24.1%, 8=52.5%, 16=7.5%, 32=0.0%, >=64=0.0% 00:28:48.022 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:48.022 complete : 0=0.0%, 4=93.9%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:48.022 issued rwts: total=4924,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:48.022 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:48.022 filename1: (groupid=0, jobs=1): err= 0: pid=1395141: Wed Apr 24 21:34:01 2024 00:28:48.022 read: IOPS=492, BW=1971KiB/s (2018kB/s)(19.2MiB/10001msec) 00:28:48.022 slat (nsec): min=5273, max=86023, avg=10837.49, stdev=5510.24 00:28:48.022 clat (usec): min=3146, max=35377, avg=32370.15, stdev=2390.45 00:28:48.022 lat (usec): min=3158, max=35396, avg=32380.99, stdev=2390.30 00:28:48.022 clat percentiles (usec): 00:28:48.022 | 1.00th=[25297], 5.00th=[32375], 10.00th=[32375], 20.00th=[32375], 00:28:48.022 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32637], 00:28:48.022 | 70.00th=[32637], 80.00th=[32637], 90.00th=[32900], 95.00th=[33424], 00:28:48.022 | 99.00th=[33424], 99.50th=[34341], 99.90th=[35390], 99.95th=[35390], 00:28:48.022 | 99.99th=[35390] 00:28:48.022 bw ( KiB/s): min= 1920, max= 2176, per=4.19%, avg=1967.16, stdev=76.45, samples=19 00:28:48.022 iops : min= 480, max= 544, avg=491.79, stdev=19.11, samples=19 00:28:48.022 lat (msec) : 4=0.04%, 10=0.61%, 20=0.32%, 50=99.03% 00:28:48.022 cpu : usr=98.98%, sys=0.65%, ctx=16, majf=0, minf=1637 00:28:48.022 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:28:48.022 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:48.022 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:48.022 issued rwts: total=4928,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:48.022 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:48.022 filename1: (groupid=0, jobs=1): err= 0: pid=1395142: Wed Apr 24 21:34:01 2024 00:28:48.022 read: IOPS=487, BW=1951KiB/s (1998kB/s)(19.1MiB/10003msec) 00:28:48.022 slat (nsec): min=5763, max=96480, avg=19986.47, stdev=17822.71 00:28:48.022 clat (usec): min=16995, max=58799, avg=32650.07, stdev=1746.71 00:28:48.022 lat (usec): min=17010, max=58830, avg=32670.06, stdev=1745.60 00:28:48.022 
clat percentiles (usec): 00:28:48.022 | 1.00th=[31589], 5.00th=[32113], 10.00th=[32375], 20.00th=[32375], 00:28:48.022 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32637], 00:28:48.022 | 70.00th=[32637], 80.00th=[32637], 90.00th=[32900], 95.00th=[33162], 00:28:48.022 | 99.00th=[33817], 99.50th=[34866], 99.90th=[58983], 99.95th=[58983], 00:28:48.022 | 99.99th=[58983] 00:28:48.022 bw ( KiB/s): min= 1776, max= 2048, per=4.15%, avg=1946.95, stdev=70.36, samples=19 00:28:48.022 iops : min= 444, max= 512, avg=486.74, stdev=17.59, samples=19 00:28:48.022 lat (msec) : 20=0.12%, 50=99.55%, 100=0.33% 00:28:48.022 cpu : usr=89.45%, sys=4.89%, ctx=169, majf=0, minf=1633 00:28:48.022 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:28:48.022 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:48.022 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:48.022 issued rwts: total=4880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:48.022 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:48.022 filename1: (groupid=0, jobs=1): err= 0: pid=1395143: Wed Apr 24 21:34:01 2024 00:28:48.022 read: IOPS=487, BW=1950KiB/s (1997kB/s)(19.1MiB/10011msec) 00:28:48.022 slat (nsec): min=5249, max=53539, avg=15886.10, stdev=8757.54 00:28:48.022 clat (usec): min=31717, max=58407, avg=32675.87, stdev=1500.00 00:28:48.022 lat (usec): min=31725, max=58433, avg=32691.75, stdev=1499.51 00:28:48.022 clat percentiles (usec): 00:28:48.022 | 1.00th=[32113], 5.00th=[32375], 10.00th=[32375], 20.00th=[32375], 00:28:48.022 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32637], 00:28:48.022 | 70.00th=[32637], 80.00th=[32637], 90.00th=[32900], 95.00th=[33162], 00:28:48.022 | 99.00th=[33817], 99.50th=[33817], 99.90th=[58459], 99.95th=[58459], 00:28:48.022 | 99.99th=[58459] 00:28:48.022 bw ( KiB/s): min= 1795, max= 2048, per=4.14%, avg=1944.75, stdev=67.16, samples=20 00:28:48.022 iops : min= 448, max= 512, avg=486.15, stdev=16.88, samples=20 00:28:48.022 lat (msec) : 50=99.67%, 100=0.33% 00:28:48.022 cpu : usr=95.58%, sys=2.26%, ctx=78, majf=0, minf=1637 00:28:48.022 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:28:48.022 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:48.022 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:48.022 issued rwts: total=4880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:48.022 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:48.022 filename2: (groupid=0, jobs=1): err= 0: pid=1395144: Wed Apr 24 21:34:01 2024 00:28:48.022 read: IOPS=487, BW=1950KiB/s (1997kB/s)(19.1MiB/10023msec) 00:28:48.022 slat (nsec): min=5679, max=92733, avg=22327.31, stdev=15604.47 00:28:48.022 clat (usec): min=18776, max=59436, avg=32607.02, stdev=1918.48 00:28:48.022 lat (usec): min=18784, max=59463, avg=32629.34, stdev=1917.82 00:28:48.022 clat percentiles (usec): 00:28:48.022 | 1.00th=[31065], 5.00th=[32113], 10.00th=[32113], 20.00th=[32375], 00:28:48.022 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:28:48.022 | 70.00th=[32637], 80.00th=[32637], 90.00th=[32900], 95.00th=[33162], 00:28:48.022 | 99.00th=[34341], 99.50th=[49021], 99.90th=[59507], 99.95th=[59507], 00:28:48.022 | 99.99th=[59507] 00:28:48.022 bw ( KiB/s): min= 1792, max= 2048, per=4.15%, avg=1946.60, stdev=66.13, samples=20 00:28:48.022 iops : min= 448, max= 512, avg=486.75, stdev=16.51, samples=20 00:28:48.022 lat (msec) : 
20=0.08%, 50=99.55%, 100=0.37% 00:28:48.022 cpu : usr=98.78%, sys=0.83%, ctx=22, majf=0, minf=1636 00:28:48.022 IO depths : 1=6.0%, 2=12.2%, 4=24.8%, 8=50.5%, 16=6.5%, 32=0.0%, >=64=0.0% 00:28:48.022 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:48.022 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:48.022 issued rwts: total=4886,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:48.022 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:48.022 filename2: (groupid=0, jobs=1): err= 0: pid=1395145: Wed Apr 24 21:34:01 2024 00:28:48.022 read: IOPS=487, BW=1948KiB/s (1995kB/s)(19.1MiB/10019msec) 00:28:48.022 slat (nsec): min=4514, max=84595, avg=32564.60, stdev=14696.33 00:28:48.022 clat (usec): min=25113, max=68599, avg=32567.75, stdev=2101.42 00:28:48.022 lat (usec): min=25121, max=68625, avg=32600.31, stdev=2099.98 00:28:48.022 clat percentiles (usec): 00:28:48.022 | 1.00th=[31851], 5.00th=[32113], 10.00th=[32113], 20.00th=[32113], 00:28:48.022 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32375], 00:28:48.023 | 70.00th=[32637], 80.00th=[32637], 90.00th=[32900], 95.00th=[33162], 00:28:48.023 | 99.00th=[33817], 99.50th=[34866], 99.90th=[68682], 99.95th=[68682], 00:28:48.023 | 99.99th=[68682] 00:28:48.023 bw ( KiB/s): min= 1792, max= 2048, per=4.15%, avg=1945.60, stdev=66.96, samples=20 00:28:48.023 iops : min= 448, max= 512, avg=486.40, stdev=16.74, samples=20 00:28:48.023 lat (msec) : 50=99.67%, 100=0.33% 00:28:48.023 cpu : usr=99.08%, sys=0.54%, ctx=16, majf=0, minf=1636 00:28:48.023 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:28:48.023 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:48.023 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:48.023 issued rwts: total=4880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:48.023 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:48.023 filename2: (groupid=0, jobs=1): err= 0: pid=1395146: Wed Apr 24 21:34:01 2024 00:28:48.023 read: IOPS=494, BW=1977KiB/s (2024kB/s)(19.3MiB/10003msec) 00:28:48.023 slat (usec): min=4, max=104, avg=10.18, stdev= 4.34 00:28:48.023 clat (usec): min=2952, max=48189, avg=32283.63, stdev=2956.58 00:28:48.023 lat (usec): min=2961, max=48197, avg=32293.80, stdev=2956.66 00:28:48.023 clat percentiles (usec): 00:28:48.023 | 1.00th=[13304], 5.00th=[32375], 10.00th=[32375], 20.00th=[32375], 00:28:48.023 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32637], 00:28:48.023 | 70.00th=[32637], 80.00th=[32637], 90.00th=[32900], 95.00th=[33424], 00:28:48.023 | 99.00th=[33424], 99.50th=[33817], 99.90th=[34866], 99.95th=[34866], 00:28:48.023 | 99.99th=[47973] 00:28:48.023 bw ( KiB/s): min= 1920, max= 2304, per=4.20%, avg=1973.89, stdev=98.37, samples=19 00:28:48.023 iops : min= 480, max= 576, avg=493.47, stdev=24.59, samples=19 00:28:48.023 lat (msec) : 4=0.32%, 10=0.65%, 20=0.36%, 50=98.67% 00:28:48.023 cpu : usr=98.98%, sys=0.66%, ctx=18, majf=0, minf=1637 00:28:48.023 IO depths : 1=6.1%, 2=12.3%, 4=24.8%, 8=50.4%, 16=6.4%, 32=0.0%, >=64=0.0% 00:28:48.023 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:48.023 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:48.023 issued rwts: total=4944,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:48.023 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:48.023 filename2: (groupid=0, jobs=1): err= 0: pid=1395147: Wed Apr 24 21:34:01 
2024 00:28:48.023 read: IOPS=486, BW=1947KiB/s (1994kB/s)(19.1MiB/10025msec) 00:28:48.023 slat (nsec): min=3792, max=69607, avg=29012.11, stdev=11953.77 00:28:48.023 clat (usec): min=25571, max=74573, avg=32609.65, stdev=2444.21 00:28:48.023 lat (usec): min=25579, max=74605, avg=32638.66, stdev=2442.85 00:28:48.023 clat percentiles (usec): 00:28:48.023 | 1.00th=[32113], 5.00th=[32113], 10.00th=[32113], 20.00th=[32113], 00:28:48.023 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32375], 00:28:48.023 | 70.00th=[32637], 80.00th=[32637], 90.00th=[32900], 95.00th=[33162], 00:28:48.023 | 99.00th=[33817], 99.50th=[34341], 99.90th=[74974], 99.95th=[74974], 00:28:48.023 | 99.99th=[74974] 00:28:48.023 bw ( KiB/s): min= 1667, max= 2048, per=4.15%, avg=1945.75, stdev=88.57, samples=20 00:28:48.023 iops : min= 416, max= 512, avg=486.40, stdev=22.27, samples=20 00:28:48.023 lat (msec) : 50=99.67%, 100=0.33% 00:28:48.023 cpu : usr=98.80%, sys=0.76%, ctx=73, majf=0, minf=1637 00:28:48.023 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:28:48.023 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:48.023 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:48.023 issued rwts: total=4880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:48.023 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:48.023 filename2: (groupid=0, jobs=1): err= 0: pid=1395148: Wed Apr 24 21:34:01 2024 00:28:48.023 read: IOPS=489, BW=1957KiB/s (2004kB/s)(19.1MiB/10007msec) 00:28:48.023 slat (usec): min=5, max=102, avg=34.18, stdev=20.84 00:28:48.023 clat (usec): min=14941, max=47509, avg=32389.13, stdev=1374.20 00:28:48.023 lat (usec): min=14953, max=47535, avg=32423.31, stdev=1373.76 00:28:48.023 clat percentiles (usec): 00:28:48.023 | 1.00th=[31589], 5.00th=[32113], 10.00th=[32113], 20.00th=[32113], 00:28:48.023 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32375], 60.00th=[32375], 00:28:48.023 | 70.00th=[32637], 80.00th=[32637], 90.00th=[32900], 95.00th=[33162], 00:28:48.023 | 99.00th=[33817], 99.50th=[34866], 99.90th=[47449], 99.95th=[47449], 00:28:48.023 | 99.99th=[47449] 00:28:48.023 bw ( KiB/s): min= 1795, max= 2048, per=4.15%, avg=1947.11, stdev=68.14, samples=19 00:28:48.023 iops : min= 448, max= 512, avg=486.74, stdev=17.13, samples=19 00:28:48.023 lat (msec) : 20=0.33%, 50=99.67% 00:28:48.023 cpu : usr=98.93%, sys=0.69%, ctx=49, majf=0, minf=1635 00:28:48.023 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:28:48.023 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:48.023 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:48.023 issued rwts: total=4896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:48.023 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:48.023 filename2: (groupid=0, jobs=1): err= 0: pid=1395149: Wed Apr 24 21:34:01 2024 00:28:48.023 read: IOPS=488, BW=1955KiB/s (2002kB/s)(19.1MiB/10015msec) 00:28:48.023 slat (nsec): min=5423, max=99668, avg=34439.90, stdev=20628.04 00:28:48.023 clat (usec): min=14953, max=55166, avg=32407.39, stdev=1717.69 00:28:48.023 lat (usec): min=14965, max=55193, avg=32441.83, stdev=1717.01 00:28:48.023 clat percentiles (usec): 00:28:48.023 | 1.00th=[31589], 5.00th=[32113], 10.00th=[32113], 20.00th=[32113], 00:28:48.023 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32375], 60.00th=[32375], 00:28:48.023 | 70.00th=[32637], 80.00th=[32637], 90.00th=[32900], 95.00th=[33162], 00:28:48.023 | 
99.00th=[33817], 99.50th=[34866], 99.90th=[55313], 99.95th=[55313], 00:28:48.023 | 99.99th=[55313] 00:28:48.023 bw ( KiB/s): min= 1792, max= 2048, per=4.15%, avg=1946.95, stdev=68.52, samples=19 00:28:48.023 iops : min= 448, max= 512, avg=486.74, stdev=17.13, samples=19 00:28:48.023 lat (msec) : 20=0.33%, 50=99.35%, 100=0.33% 00:28:48.023 cpu : usr=94.08%, sys=2.77%, ctx=107, majf=0, minf=1633 00:28:48.023 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:28:48.023 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:48.023 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:48.023 issued rwts: total=4896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:48.023 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:48.023 filename2: (groupid=0, jobs=1): err= 0: pid=1395150: Wed Apr 24 21:34:01 2024 00:28:48.023 read: IOPS=487, BW=1951KiB/s (1998kB/s)(19.1MiB/10004msec) 00:28:48.023 slat (usec): min=7, max=128, avg=29.38, stdev=11.56 00:28:48.023 clat (usec): min=24420, max=54065, avg=32533.22, stdev=1451.89 00:28:48.023 lat (usec): min=24441, max=54105, avg=32562.60, stdev=1451.08 00:28:48.023 clat percentiles (usec): 00:28:48.023 | 1.00th=[31851], 5.00th=[32113], 10.00th=[32113], 20.00th=[32113], 00:28:48.023 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32375], 00:28:48.023 | 70.00th=[32637], 80.00th=[32637], 90.00th=[32900], 95.00th=[33162], 00:28:48.023 | 99.00th=[33817], 99.50th=[41157], 99.90th=[54264], 99.95th=[54264], 00:28:48.023 | 99.99th=[54264] 00:28:48.023 bw ( KiB/s): min= 1792, max= 2048, per=4.15%, avg=1946.95, stdev=68.73, samples=19 00:28:48.023 iops : min= 448, max= 512, avg=486.74, stdev=17.18, samples=19 00:28:48.023 lat (msec) : 50=99.67%, 100=0.33% 00:28:48.023 cpu : usr=98.62%, sys=0.89%, ctx=131, majf=0, minf=1633 00:28:48.023 IO depths : 1=5.8%, 2=12.0%, 4=24.9%, 8=50.6%, 16=6.7%, 32=0.0%, >=64=0.0% 00:28:48.023 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:48.023 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:48.023 issued rwts: total=4880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:48.023 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:48.023 filename2: (groupid=0, jobs=1): err= 0: pid=1395151: Wed Apr 24 21:34:01 2024 00:28:48.023 read: IOPS=487, BW=1948KiB/s (1995kB/s)(19.1MiB/10018msec) 00:28:48.023 slat (nsec): min=5280, max=69403, avg=20926.92, stdev=13447.15 00:28:48.023 clat (usec): min=23809, max=67443, avg=32689.99, stdev=2121.24 00:28:48.023 lat (usec): min=23818, max=67469, avg=32710.92, stdev=2120.15 00:28:48.023 clat percentiles (usec): 00:28:48.023 | 1.00th=[31851], 5.00th=[32113], 10.00th=[32375], 20.00th=[32375], 00:28:48.023 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32637], 00:28:48.023 | 70.00th=[32637], 80.00th=[32637], 90.00th=[32900], 95.00th=[33424], 00:28:48.023 | 99.00th=[34341], 99.50th=[40109], 99.90th=[67634], 99.95th=[67634], 00:28:48.023 | 99.99th=[67634] 00:28:48.024 bw ( KiB/s): min= 1795, max= 2048, per=4.15%, avg=1945.75, stdev=66.60, samples=20 00:28:48.024 iops : min= 448, max= 512, avg=486.40, stdev=16.74, samples=20 00:28:48.024 lat (msec) : 50=99.67%, 100=0.33% 00:28:48.024 cpu : usr=98.79%, sys=0.77%, ctx=122, majf=0, minf=1635 00:28:48.024 IO depths : 1=6.0%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.5%, 32=0.0%, >=64=0.0% 00:28:48.024 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:48.024 complete : 0=0.0%, 
4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:48.024 issued rwts: total=4880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:48.024 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:48.024 00:28:48.024 Run status group 0 (all jobs): 00:28:48.024 READ: bw=45.8MiB/s (48.0MB/s), 1947KiB/s-1990KiB/s (1994kB/s-2038kB/s), io=459MiB (482MB), run=10001-10026msec 00:28:48.024 ----------------------------------------------------- 00:28:48.024 Suppressions used: 00:28:48.024 count bytes template 00:28:48.024 45 402 /usr/src/fio/parse.c 00:28:48.024 1 8 libtcmalloc_minimal.so 00:28:48.024 1 904 libcrypto.so 00:28:48.024 ----------------------------------------------------- 00:28:48.024 00:28:48.024 21:34:02 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:28:48.024 21:34:02 -- target/dif.sh@43 -- # local sub 00:28:48.024 21:34:02 -- target/dif.sh@45 -- # for sub in "$@" 00:28:48.024 21:34:02 -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:48.024 21:34:02 -- target/dif.sh@36 -- # local sub_id=0 00:28:48.024 21:34:02 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:48.024 21:34:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:48.024 21:34:02 -- common/autotest_common.sh@10 -- # set +x 00:28:48.024 21:34:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:48.024 21:34:02 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:48.024 21:34:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:48.024 21:34:02 -- common/autotest_common.sh@10 -- # set +x 00:28:48.024 21:34:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:48.024 21:34:02 -- target/dif.sh@45 -- # for sub in "$@" 00:28:48.024 21:34:02 -- target/dif.sh@46 -- # destroy_subsystem 1 00:28:48.024 21:34:02 -- target/dif.sh@36 -- # local sub_id=1 00:28:48.024 21:34:02 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:48.024 21:34:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:48.024 21:34:02 -- common/autotest_common.sh@10 -- # set +x 00:28:48.024 21:34:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:48.024 21:34:02 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:28:48.024 21:34:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:48.024 21:34:02 -- common/autotest_common.sh@10 -- # set +x 00:28:48.024 21:34:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:48.024 21:34:02 -- target/dif.sh@45 -- # for sub in "$@" 00:28:48.024 21:34:02 -- target/dif.sh@46 -- # destroy_subsystem 2 00:28:48.024 21:34:02 -- target/dif.sh@36 -- # local sub_id=2 00:28:48.024 21:34:02 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:28:48.024 21:34:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:48.024 21:34:02 -- common/autotest_common.sh@10 -- # set +x 00:28:48.024 21:34:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:48.024 21:34:02 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:28:48.024 21:34:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:48.024 21:34:02 -- common/autotest_common.sh@10 -- # set +x 00:28:48.024 21:34:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:48.024 21:34:02 -- target/dif.sh@115 -- # NULL_DIF=1 00:28:48.024 21:34:02 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:28:48.024 21:34:02 -- target/dif.sh@115 -- # numjobs=2 00:28:48.024 21:34:02 -- target/dif.sh@115 -- # iodepth=8 00:28:48.024 21:34:02 -- target/dif.sh@115 -- # runtime=5 
00:28:48.024 21:34:02 -- target/dif.sh@115 -- # files=1 00:28:48.024 21:34:02 -- target/dif.sh@117 -- # create_subsystems 0 1 00:28:48.024 21:34:02 -- target/dif.sh@28 -- # local sub 00:28:48.024 21:34:02 -- target/dif.sh@30 -- # for sub in "$@" 00:28:48.024 21:34:02 -- target/dif.sh@31 -- # create_subsystem 0 00:28:48.024 21:34:02 -- target/dif.sh@18 -- # local sub_id=0 00:28:48.024 21:34:02 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:28:48.024 21:34:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:48.024 21:34:02 -- common/autotest_common.sh@10 -- # set +x 00:28:48.024 bdev_null0 00:28:48.024 21:34:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:48.024 21:34:02 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:48.024 21:34:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:48.024 21:34:02 -- common/autotest_common.sh@10 -- # set +x 00:28:48.024 21:34:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:48.024 21:34:02 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:48.024 21:34:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:48.024 21:34:02 -- common/autotest_common.sh@10 -- # set +x 00:28:48.024 21:34:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:48.024 21:34:02 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:48.024 21:34:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:48.024 21:34:02 -- common/autotest_common.sh@10 -- # set +x 00:28:48.024 [2024-04-24 21:34:02.714375] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:48.024 21:34:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:48.024 21:34:02 -- target/dif.sh@30 -- # for sub in "$@" 00:28:48.024 21:34:02 -- target/dif.sh@31 -- # create_subsystem 1 00:28:48.024 21:34:02 -- target/dif.sh@18 -- # local sub_id=1 00:28:48.024 21:34:02 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:28:48.024 21:34:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:48.024 21:34:02 -- common/autotest_common.sh@10 -- # set +x 00:28:48.024 bdev_null1 00:28:48.024 21:34:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:48.024 21:34:02 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:28:48.024 21:34:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:48.024 21:34:02 -- common/autotest_common.sh@10 -- # set +x 00:28:48.024 21:34:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:48.024 21:34:02 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:28:48.024 21:34:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:48.024 21:34:02 -- common/autotest_common.sh@10 -- # set +x 00:28:48.024 21:34:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:48.024 21:34:02 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:48.024 21:34:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:48.024 21:34:02 -- common/autotest_common.sh@10 -- # set +x 00:28:48.024 21:34:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:48.024 21:34:02 -- target/dif.sh@118 -- # fio 
/dev/fd/62 00:28:48.024 21:34:02 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:48.024 21:34:02 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:48.024 21:34:02 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:28:48.024 21:34:02 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:48.024 21:34:02 -- common/autotest_common.sh@1325 -- # local sanitizers 00:28:48.024 21:34:02 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev 00:28:48.024 21:34:02 -- common/autotest_common.sh@1327 -- # shift 00:28:48.024 21:34:02 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:28:48.024 21:34:02 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:28:48.024 21:34:02 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:28:48.024 21:34:02 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:28:48.024 21:34:02 -- nvmf/common.sh@521 -- # config=() 00:28:48.024 21:34:02 -- nvmf/common.sh@521 -- # local subsystem config 00:28:48.024 21:34:02 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:28:48.024 21:34:02 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:28:48.024 { 00:28:48.024 "params": { 00:28:48.024 "name": "Nvme$subsystem", 00:28:48.024 "trtype": "$TEST_TRANSPORT", 00:28:48.024 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:48.024 "adrfam": "ipv4", 00:28:48.024 "trsvcid": "$NVMF_PORT", 00:28:48.024 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:48.024 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:48.025 "hdgst": ${hdgst:-false}, 00:28:48.025 "ddgst": ${ddgst:-false} 00:28:48.025 }, 00:28:48.025 "method": "bdev_nvme_attach_controller" 00:28:48.025 } 00:28:48.025 EOF 00:28:48.025 )") 00:28:48.025 21:34:02 -- target/dif.sh@82 -- # gen_fio_conf 00:28:48.025 21:34:02 -- target/dif.sh@54 -- # local file 00:28:48.025 21:34:02 -- target/dif.sh@56 -- # cat 00:28:48.025 21:34:02 -- nvmf/common.sh@543 -- # cat 00:28:48.025 21:34:02 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev 00:28:48.025 21:34:02 -- common/autotest_common.sh@1331 -- # grep libasan 00:28:48.025 21:34:02 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:28:48.025 21:34:02 -- target/dif.sh@72 -- # (( file = 1 )) 00:28:48.025 21:34:02 -- target/dif.sh@72 -- # (( file <= files )) 00:28:48.025 21:34:02 -- target/dif.sh@73 -- # cat 00:28:48.025 21:34:02 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:28:48.025 21:34:02 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:28:48.025 { 00:28:48.025 "params": { 00:28:48.025 "name": "Nvme$subsystem", 00:28:48.025 "trtype": "$TEST_TRANSPORT", 00:28:48.025 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:48.025 "adrfam": "ipv4", 00:28:48.025 "trsvcid": "$NVMF_PORT", 00:28:48.025 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:48.025 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:48.025 "hdgst": ${hdgst:-false}, 00:28:48.025 "ddgst": ${ddgst:-false} 00:28:48.025 }, 00:28:48.025 "method": "bdev_nvme_attach_controller" 00:28:48.025 } 00:28:48.025 EOF 00:28:48.025 )") 00:28:48.025 21:34:02 -- target/dif.sh@72 -- # (( file++ )) 00:28:48.025 21:34:02 -- nvmf/common.sh@543 -- # cat 00:28:48.025 21:34:02 -- target/dif.sh@72 -- # (( file <= files )) 00:28:48.025 21:34:02 -- 
nvmf/common.sh@545 -- # jq . 00:28:48.025 21:34:02 -- nvmf/common.sh@546 -- # IFS=, 00:28:48.025 21:34:02 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:28:48.025 "params": { 00:28:48.025 "name": "Nvme0", 00:28:48.025 "trtype": "tcp", 00:28:48.025 "traddr": "10.0.0.2", 00:28:48.025 "adrfam": "ipv4", 00:28:48.025 "trsvcid": "4420", 00:28:48.025 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:48.025 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:48.025 "hdgst": false, 00:28:48.025 "ddgst": false 00:28:48.025 }, 00:28:48.025 "method": "bdev_nvme_attach_controller" 00:28:48.025 },{ 00:28:48.025 "params": { 00:28:48.025 "name": "Nvme1", 00:28:48.025 "trtype": "tcp", 00:28:48.025 "traddr": "10.0.0.2", 00:28:48.025 "adrfam": "ipv4", 00:28:48.025 "trsvcid": "4420", 00:28:48.025 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:48.025 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:48.025 "hdgst": false, 00:28:48.025 "ddgst": false 00:28:48.025 }, 00:28:48.025 "method": "bdev_nvme_attach_controller" 00:28:48.025 }' 00:28:48.025 21:34:02 -- common/autotest_common.sh@1331 -- # asan_lib=/usr/lib64/libasan.so.8 00:28:48.025 21:34:02 -- common/autotest_common.sh@1332 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:28:48.025 21:34:02 -- common/autotest_common.sh@1333 -- # break 00:28:48.025 21:34:02 -- common/autotest_common.sh@1338 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:48.025 21:34:02 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:48.284 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:28:48.284 ... 00:28:48.284 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:28:48.284 ... 
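[Note] The fio_bdev invocation traced above reduces to the standalone sketch below. The JSON matches what nvmf/common.sh@547 printed to /dev/fd/62, and the job-file body is reconstructed from what fio itself echoed (rw=randread, bs=8k,16k,128k as read/write/trim sizes, iodepth=8) plus the numjobs=2/runtime=5/files=1 knobs set by target/dif.sh@115; the generated job file is never shown in the log, so the filename= bdev names (Nvme0n1, Nvme1n1) and time_based are assumptions.

  # JSON consumed by the spdk_bdev ioengine: one bdev_nvme_attach_controller per subsystem,
  # exactly as printed above for Nvme0/Nvme1
  cat > /tmp/bdev.json <<'EOF'
  { "subsystems": [ { "subsystem": "bdev", "config": [
    { "method": "bdev_nvme_attach_controller",
      "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
                  "adrfam": "ipv4", "trsvcid": "4420",
                  "subnqn": "nqn.2016-06.io.spdk:cnode0",
                  "hostnqn": "nqn.2016-06.io.spdk:host0",
                  "hdgst": false, "ddgst": false } },
    { "method": "bdev_nvme_attach_controller",
      "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
                  "adrfam": "ipv4", "trsvcid": "4420",
                  "subnqn": "nqn.2016-06.io.spdk:cnode1",
                  "hostnqn": "nqn.2016-06.io.spdk:host1",
                  "hdgst": false, "ddgst": false } } ] } ] }
  EOF
  # Reconstructed job file; two jobs x numjobs=2 gives the "Starting 4 threads" seen below
  cat > /tmp/dif.fio <<'EOF'
  [global]
  thread=1                 # the SPDK fio plugin requires thread mode
  rw=randread
  bs=8k,16k,128k           # read,write,trim block sizes, as echoed by fio above
  iodepth=8
  numjobs=2
  runtime=5
  time_based=1             # assumed; the harness only sets runtime
  [filename0]
  filename=Nvme0n1         # assumed namespace bdev name for controller Nvme0
  [filename1]
  filename=Nvme1n1
  EOF
  # Same shape as the traced command; the ASan preload applies only on sanitizer builds
  LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev' \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /tmp/bdev.json /tmp/dif.fio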
00:28:48.284 fio-3.35 00:28:48.284 Starting 4 threads 00:28:48.541 EAL: No free 2048 kB hugepages reported on node 1 00:28:55.126 00:28:55.126 filename0: (groupid=0, jobs=1): err= 0: pid=1397663: Wed Apr 24 21:34:08 2024 00:28:55.126 read: IOPS=2653, BW=20.7MiB/s (21.7MB/s)(104MiB/5003msec) 00:28:55.126 slat (nsec): min=5931, max=47883, avg=8731.18, stdev=4140.58 00:28:55.126 clat (usec): min=745, max=9550, avg=2988.78, stdev=481.94 00:28:55.126 lat (usec): min=758, max=9576, avg=2997.51, stdev=482.49 00:28:55.126 clat percentiles (usec): 00:28:55.126 | 1.00th=[ 1827], 5.00th=[ 2245], 10.00th=[ 2409], 20.00th=[ 2638], 00:28:55.126 | 30.00th=[ 2769], 40.00th=[ 2900], 50.00th=[ 3064], 60.00th=[ 3163], 00:28:55.126 | 70.00th=[ 3195], 80.00th=[ 3294], 90.00th=[ 3425], 95.00th=[ 3621], 00:28:55.126 | 99.00th=[ 4293], 99.50th=[ 4752], 99.90th=[ 5604], 99.95th=[ 9372], 00:28:55.126 | 99.99th=[ 9503] 00:28:55.126 bw ( KiB/s): min=19968, max=22848, per=26.38%, avg=21229.50, stdev=1063.91, samples=10 00:28:55.126 iops : min= 2496, max= 2856, avg=2653.60, stdev=133.10, samples=10 00:28:55.126 lat (usec) : 750=0.01%, 1000=0.04% 00:28:55.126 lat (msec) : 2=1.79%, 4=96.31%, 10=1.85% 00:28:55.126 cpu : usr=97.94%, sys=1.76%, ctx=9, majf=0, minf=1638 00:28:55.126 IO depths : 1=0.1%, 2=10.3%, 4=60.7%, 8=28.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:55.126 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:55.126 complete : 0=0.0%, 4=93.3%, 8=6.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:55.126 issued rwts: total=13274,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:55.126 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:55.126 filename0: (groupid=0, jobs=1): err= 0: pid=1397664: Wed Apr 24 21:34:08 2024 00:28:55.126 read: IOPS=2551, BW=19.9MiB/s (20.9MB/s)(99.7MiB/5002msec) 00:28:55.126 slat (nsec): min=5944, max=52870, avg=9074.20, stdev=4504.14 00:28:55.126 clat (usec): min=724, max=6563, avg=3107.25, stdev=506.78 00:28:55.126 lat (usec): min=737, max=6589, avg=3116.33, stdev=506.97 00:28:55.126 clat percentiles (usec): 00:28:55.126 | 1.00th=[ 1909], 5.00th=[ 2343], 10.00th=[ 2507], 20.00th=[ 2737], 00:28:55.126 | 30.00th=[ 2900], 40.00th=[ 3064], 50.00th=[ 3130], 60.00th=[ 3195], 00:28:55.126 | 70.00th=[ 3261], 80.00th=[ 3359], 90.00th=[ 3654], 95.00th=[ 3916], 00:28:55.126 | 99.00th=[ 4686], 99.50th=[ 5145], 99.90th=[ 5800], 99.95th=[ 6194], 00:28:55.126 | 99.99th=[ 6521] 00:28:55.126 bw ( KiB/s): min=19776, max=21344, per=25.37%, avg=20411.20, stdev=549.98, samples=10 00:28:55.126 iops : min= 2472, max= 2668, avg=2551.40, stdev=68.75, samples=10 00:28:55.126 lat (usec) : 750=0.02%, 1000=0.07% 00:28:55.126 lat (msec) : 2=1.27%, 4=94.16%, 10=4.48% 00:28:55.126 cpu : usr=97.94%, sys=1.76%, ctx=9, majf=0, minf=1633 00:28:55.126 IO depths : 1=0.1%, 2=9.3%, 4=61.9%, 8=28.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:55.126 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:55.126 complete : 0=0.0%, 4=93.0%, 8=7.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:55.126 issued rwts: total=12762,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:55.126 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:55.126 filename1: (groupid=0, jobs=1): err= 0: pid=1397665: Wed Apr 24 21:34:08 2024 00:28:55.126 read: IOPS=2386, BW=18.6MiB/s (19.5MB/s)(93.2MiB/5001msec) 00:28:55.126 slat (usec): min=5, max=139, avg=10.27, stdev= 4.50 00:28:55.126 clat (usec): min=658, max=47488, avg=3322.25, stdev=1254.70 00:28:55.126 lat (usec): min=669, max=47514, avg=3332.52, 
stdev=1254.61 00:28:55.126 clat percentiles (usec): 00:28:55.126 | 1.00th=[ 2147], 5.00th=[ 2638], 10.00th=[ 2835], 20.00th=[ 2999], 00:28:55.126 | 30.00th=[ 3130], 40.00th=[ 3163], 50.00th=[ 3195], 60.00th=[ 3261], 00:28:55.126 | 70.00th=[ 3392], 80.00th=[ 3523], 90.00th=[ 3851], 95.00th=[ 4293], 00:28:55.126 | 99.00th=[ 5080], 99.50th=[ 5342], 99.90th=[ 5800], 99.95th=[47449], 00:28:55.126 | 99.99th=[47449] 00:28:55.126 bw ( KiB/s): min=17603, max=20352, per=23.72%, avg=19085.10, stdev=688.33, samples=10 00:28:55.126 iops : min= 2200, max= 2544, avg=2385.60, stdev=86.13, samples=10 00:28:55.126 lat (usec) : 750=0.05%, 1000=0.11% 00:28:55.126 lat (msec) : 2=0.68%, 4=91.48%, 10=7.62%, 50=0.07% 00:28:55.126 cpu : usr=97.60%, sys=2.12%, ctx=7, majf=0, minf=1636 00:28:55.126 IO depths : 1=0.1%, 2=5.8%, 4=66.1%, 8=28.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:55.126 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:55.126 complete : 0=0.0%, 4=92.5%, 8=7.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:55.126 issued rwts: total=11934,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:55.126 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:55.126 filename1: (groupid=0, jobs=1): err= 0: pid=1397666: Wed Apr 24 21:34:08 2024 00:28:55.126 read: IOPS=2469, BW=19.3MiB/s (20.2MB/s)(96.5MiB/5004msec) 00:28:55.126 slat (nsec): min=3618, max=41116, avg=7914.11, stdev=2893.65 00:28:55.126 clat (usec): min=606, max=10698, avg=3216.86, stdev=532.93 00:28:55.126 lat (usec): min=615, max=10720, avg=3224.78, stdev=532.91 00:28:55.126 clat percentiles (usec): 00:28:55.126 | 1.00th=[ 2040], 5.00th=[ 2507], 10.00th=[ 2671], 20.00th=[ 2900], 00:28:55.126 | 30.00th=[ 3064], 40.00th=[ 3130], 50.00th=[ 3195], 60.00th=[ 3228], 00:28:55.126 | 70.00th=[ 3294], 80.00th=[ 3458], 90.00th=[ 3785], 95.00th=[ 4113], 00:28:55.126 | 99.00th=[ 5080], 99.50th=[ 5342], 99.90th=[ 5735], 99.95th=[10552], 00:28:55.126 | 99.99th=[10552] 00:28:55.126 bw ( KiB/s): min=19344, max=20416, per=24.56%, avg=19758.40, stdev=378.36, samples=10 00:28:55.126 iops : min= 2418, max= 2552, avg=2469.80, stdev=47.30, samples=10 00:28:55.126 lat (usec) : 750=0.03%, 1000=0.10% 00:28:55.126 lat (msec) : 2=0.74%, 4=93.23%, 10=5.83%, 20=0.06% 00:28:55.126 cpu : usr=98.14%, sys=1.58%, ctx=6, majf=0, minf=1635 00:28:55.126 IO depths : 1=0.1%, 2=5.5%, 4=65.6%, 8=28.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:55.126 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:55.126 complete : 0=0.0%, 4=93.2%, 8=6.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:55.126 issued rwts: total=12357,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:55.126 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:55.126 00:28:55.126 Run status group 0 (all jobs): 00:28:55.127 READ: bw=78.6MiB/s (82.4MB/s), 18.6MiB/s-20.7MiB/s (19.5MB/s-21.7MB/s), io=393MiB (412MB), run=5001-5004msec 00:28:55.127 ----------------------------------------------------- 00:28:55.127 Suppressions used: 00:28:55.127 count bytes template 00:28:55.127 6 52 /usr/src/fio/parse.c 00:28:55.127 1 8 libtcmalloc_minimal.so 00:28:55.127 1 904 libcrypto.so 00:28:55.127 ----------------------------------------------------- 00:28:55.127 00:28:55.127 21:34:09 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:28:55.127 21:34:09 -- target/dif.sh@43 -- # local sub 00:28:55.127 21:34:09 -- target/dif.sh@45 -- # for sub in "$@" 00:28:55.127 21:34:09 -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:55.127 21:34:09 -- target/dif.sh@36 -- # local sub_id=0 00:28:55.127 
21:34:09 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:55.127 21:34:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:55.127 21:34:09 -- common/autotest_common.sh@10 -- # set +x 00:28:55.127 21:34:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:55.127 21:34:09 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:55.127 21:34:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:55.127 21:34:09 -- common/autotest_common.sh@10 -- # set +x 00:28:55.127 21:34:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:55.127 21:34:09 -- target/dif.sh@45 -- # for sub in "$@" 00:28:55.127 21:34:09 -- target/dif.sh@46 -- # destroy_subsystem 1 00:28:55.127 21:34:09 -- target/dif.sh@36 -- # local sub_id=1 00:28:55.127 21:34:09 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:55.127 21:34:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:55.127 21:34:09 -- common/autotest_common.sh@10 -- # set +x 00:28:55.127 21:34:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:55.127 21:34:09 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:28:55.127 21:34:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:55.127 21:34:09 -- common/autotest_common.sh@10 -- # set +x 00:28:55.127 21:34:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:55.127 00:28:55.127 real 0m26.139s 00:28:55.127 user 5m20.177s 00:28:55.127 sys 0m4.660s 00:28:55.127 21:34:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:28:55.127 21:34:09 -- common/autotest_common.sh@10 -- # set +x 00:28:55.127 ************************************ 00:28:55.127 END TEST fio_dif_rand_params 00:28:55.127 ************************************ 00:28:55.127 21:34:09 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:28:55.127 21:34:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:28:55.127 21:34:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:55.127 21:34:09 -- common/autotest_common.sh@10 -- # set +x 00:28:55.127 ************************************ 00:28:55.127 START TEST fio_dif_digest 00:28:55.127 ************************************ 00:28:55.127 21:34:09 -- common/autotest_common.sh@1111 -- # fio_dif_digest 00:28:55.127 21:34:09 -- target/dif.sh@123 -- # local NULL_DIF 00:28:55.127 21:34:09 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:28:55.127 21:34:09 -- target/dif.sh@125 -- # local hdgst ddgst 00:28:55.127 21:34:09 -- target/dif.sh@127 -- # NULL_DIF=3 00:28:55.127 21:34:09 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:28:55.127 21:34:09 -- target/dif.sh@127 -- # numjobs=3 00:28:55.127 21:34:09 -- target/dif.sh@127 -- # iodepth=3 00:28:55.127 21:34:09 -- target/dif.sh@127 -- # runtime=10 00:28:55.127 21:34:09 -- target/dif.sh@128 -- # hdgst=true 00:28:55.127 21:34:09 -- target/dif.sh@128 -- # ddgst=true 00:28:55.127 21:34:09 -- target/dif.sh@130 -- # create_subsystems 0 00:28:55.127 21:34:09 -- target/dif.sh@28 -- # local sub 00:28:55.127 21:34:09 -- target/dif.sh@30 -- # for sub in "$@" 00:28:55.127 21:34:09 -- target/dif.sh@31 -- # create_subsystem 0 00:28:55.127 21:34:09 -- target/dif.sh@18 -- # local sub_id=0 00:28:55.127 21:34:09 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:28:55.127 21:34:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:55.127 21:34:09 -- common/autotest_common.sh@10 -- # set +x 00:28:55.127 bdev_null0 00:28:55.127 
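[Note] The xtrace lines above compress the whole fio_dif_digest target setup; flattened into direct RPC calls it is the sequence below. Every flag is taken verbatim from the rpc_cmd lines in this log; only the use of scripts/rpc.py against the default application socket (instead of the harness's rpc_cmd wrapper) is assumed.

  # 64 MB null bdev, 512B blocks, 16B metadata, end-to-end data protection (DIF) type 3
  scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
      --serial-number 53313233-0 --allow-any-host
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
      -t tcp -a 10.0.0.2 -s 4420
  # The initiator side then attaches with "hdgst": true and "ddgst": true (see the
  # printed JSON below), so every NVMe/TCP PDU carries header and data digests while
  # fio drives 128k reads from 3 threads at iodepth=3 for 10 seconds.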
21:34:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:55.127 21:34:09 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:55.127 21:34:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:55.127 21:34:09 -- common/autotest_common.sh@10 -- # set +x 00:28:55.127 21:34:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:55.127 21:34:09 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:55.127 21:34:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:55.127 21:34:09 -- common/autotest_common.sh@10 -- # set +x 00:28:55.127 21:34:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:55.127 21:34:09 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:55.127 21:34:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:55.127 21:34:09 -- common/autotest_common.sh@10 -- # set +x 00:28:55.127 [2024-04-24 21:34:09.727216] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:55.127 21:34:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:55.127 21:34:09 -- target/dif.sh@131 -- # fio /dev/fd/62 00:28:55.127 21:34:09 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:28:55.127 21:34:09 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:55.127 21:34:09 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:55.127 21:34:09 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:28:55.127 21:34:09 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:28:55.127 21:34:09 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:55.127 21:34:09 -- nvmf/common.sh@521 -- # config=() 00:28:55.127 21:34:09 -- common/autotest_common.sh@1325 -- # local sanitizers 00:28:55.127 21:34:09 -- nvmf/common.sh@521 -- # local subsystem config 00:28:55.127 21:34:09 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev 00:28:55.127 21:34:09 -- common/autotest_common.sh@1327 -- # shift 00:28:55.127 21:34:09 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:28:55.127 21:34:09 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:28:55.127 21:34:09 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:28:55.127 { 00:28:55.127 "params": { 00:28:55.127 "name": "Nvme$subsystem", 00:28:55.127 "trtype": "$TEST_TRANSPORT", 00:28:55.127 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:55.127 "adrfam": "ipv4", 00:28:55.127 "trsvcid": "$NVMF_PORT", 00:28:55.127 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:55.127 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:55.127 "hdgst": ${hdgst:-false}, 00:28:55.127 "ddgst": ${ddgst:-false} 00:28:55.127 }, 00:28:55.127 "method": "bdev_nvme_attach_controller" 00:28:55.127 } 00:28:55.127 EOF 00:28:55.127 )") 00:28:55.127 21:34:09 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:28:55.127 21:34:09 -- target/dif.sh@82 -- # gen_fio_conf 00:28:55.127 21:34:09 -- target/dif.sh@54 -- # local file 00:28:55.127 21:34:09 -- target/dif.sh@56 -- # cat 00:28:55.127 21:34:09 -- nvmf/common.sh@543 -- # cat 00:28:55.127 21:34:09 -- common/autotest_common.sh@1331 -- # ldd 
/var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev 00:28:55.127 21:34:09 -- common/autotest_common.sh@1331 -- # grep libasan 00:28:55.127 21:34:09 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:28:55.127 21:34:09 -- target/dif.sh@72 -- # (( file = 1 )) 00:28:55.127 21:34:09 -- target/dif.sh@72 -- # (( file <= files )) 00:28:55.127 21:34:09 -- nvmf/common.sh@545 -- # jq . 00:28:55.127 21:34:09 -- nvmf/common.sh@546 -- # IFS=, 00:28:55.127 21:34:09 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:28:55.127 "params": { 00:28:55.127 "name": "Nvme0", 00:28:55.127 "trtype": "tcp", 00:28:55.127 "traddr": "10.0.0.2", 00:28:55.127 "adrfam": "ipv4", 00:28:55.127 "trsvcid": "4420", 00:28:55.127 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:55.127 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:55.127 "hdgst": true, 00:28:55.127 "ddgst": true 00:28:55.127 }, 00:28:55.127 "method": "bdev_nvme_attach_controller" 00:28:55.127 }' 00:28:55.127 21:34:09 -- common/autotest_common.sh@1331 -- # asan_lib=/usr/lib64/libasan.so.8 00:28:55.127 21:34:09 -- common/autotest_common.sh@1332 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:28:55.127 21:34:09 -- common/autotest_common.sh@1333 -- # break 00:28:55.127 21:34:09 -- common/autotest_common.sh@1338 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:55.127 21:34:09 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:55.385 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:28:55.385 ... 00:28:55.385 fio-3.35 00:28:55.385 Starting 3 threads 00:28:55.385 EAL: No free 2048 kB hugepages reported on node 1 00:29:07.714 00:29:07.714 filename0: (groupid=0, jobs=1): err= 0: pid=1399288: Wed Apr 24 21:34:20 2024 00:29:07.714 read: IOPS=304, BW=38.1MiB/s (40.0MB/s)(383MiB/10046msec) 00:29:07.714 slat (nsec): min=6178, max=26712, avg=7897.26, stdev=1571.03 00:29:07.714 clat (usec): min=7559, max=50406, avg=9815.90, stdev=1226.37 00:29:07.714 lat (usec): min=7566, max=50415, avg=9823.80, stdev=1226.39 00:29:07.714 clat percentiles (usec): 00:29:07.714 | 1.00th=[ 8160], 5.00th=[ 8586], 10.00th=[ 8848], 20.00th=[ 9241], 00:29:07.714 | 30.00th=[ 9503], 40.00th=[ 9634], 50.00th=[ 9765], 60.00th=[ 9896], 00:29:07.714 | 70.00th=[10159], 80.00th=[10290], 90.00th=[10683], 95.00th=[10945], 00:29:07.714 | 99.00th=[11863], 99.50th=[12387], 99.90th=[13566], 99.95th=[46924], 00:29:07.714 | 99.99th=[50594] 00:29:07.714 bw ( KiB/s): min=36864, max=40192, per=35.63%, avg=39180.80, stdev=955.97, samples=20 00:29:07.714 iops : min= 288, max= 314, avg=306.10, stdev= 7.47, samples=20 00:29:07.714 lat (msec) : 10=62.32%, 20=37.61%, 50=0.03%, 100=0.03% 00:29:07.714 cpu : usr=97.48%, sys=2.26%, ctx=14, majf=0, minf=1638 00:29:07.714 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:07.714 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:07.714 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:07.714 issued rwts: total=3063,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:07.714 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:07.714 filename0: (groupid=0, jobs=1): err= 0: pid=1399289: Wed Apr 24 21:34:20 2024 00:29:07.714 read: IOPS=271, BW=34.0MiB/s (35.6MB/s)(342MiB/10046msec) 00:29:07.714 slat (nsec): min=3937, max=18477, avg=7662.20, stdev=1185.65 00:29:07.714 clat (usec): min=8360, 
max=48512, avg=11005.87, stdev=1317.10 00:29:07.714 lat (usec): min=8369, max=48520, avg=11013.53, stdev=1317.13 00:29:07.714 clat percentiles (usec): 00:29:07.714 | 1.00th=[ 9241], 5.00th=[ 9765], 10.00th=[10028], 20.00th=[10290], 00:29:07.714 | 30.00th=[10552], 40.00th=[10683], 50.00th=[10945], 60.00th=[11076], 00:29:07.714 | 70.00th=[11338], 80.00th=[11600], 90.00th=[11994], 95.00th=[12387], 00:29:07.714 | 99.00th=[13435], 99.50th=[13960], 99.90th=[15926], 99.95th=[48497], 00:29:07.714 | 99.99th=[48497] 00:29:07.714 bw ( KiB/s): min=32768, max=36096, per=31.78%, avg=34944.00, stdev=857.14, samples=20 00:29:07.714 iops : min= 256, max= 282, avg=273.00, stdev= 6.70, samples=20 00:29:07.714 lat (msec) : 10=9.70%, 20=90.23%, 50=0.07% 00:29:07.714 cpu : usr=97.77%, sys=1.97%, ctx=14, majf=0, minf=1634 00:29:07.714 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:07.714 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:07.714 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:07.714 issued rwts: total=2732,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:07.714 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:07.714 filename0: (groupid=0, jobs=1): err= 0: pid=1399290: Wed Apr 24 21:34:20 2024 00:29:07.714 read: IOPS=282, BW=35.3MiB/s (37.0MB/s)(354MiB/10046msec) 00:29:07.714 slat (nsec): min=4439, max=18840, avg=7673.58, stdev=1135.91 00:29:07.714 clat (usec): min=8204, max=48513, avg=10607.52, stdev=1258.12 00:29:07.714 lat (usec): min=8212, max=48520, avg=10615.19, stdev=1258.17 00:29:07.714 clat percentiles (usec): 00:29:07.714 | 1.00th=[ 8848], 5.00th=[ 9372], 10.00th=[ 9634], 20.00th=[ 9896], 00:29:07.714 | 30.00th=[10159], 40.00th=[10421], 50.00th=[10552], 60.00th=[10683], 00:29:07.714 | 70.00th=[10945], 80.00th=[11207], 90.00th=[11600], 95.00th=[11863], 00:29:07.714 | 99.00th=[12649], 99.50th=[13173], 99.90th=[16319], 99.95th=[46924], 00:29:07.714 | 99.99th=[48497] 00:29:07.714 bw ( KiB/s): min=33792, max=37376, per=32.97%, avg=36249.60, stdev=763.04, samples=20 00:29:07.714 iops : min= 264, max= 292, avg=283.20, stdev= 5.96, samples=20 00:29:07.714 lat (msec) : 10=21.87%, 20=78.06%, 50=0.07% 00:29:07.714 cpu : usr=97.80%, sys=1.94%, ctx=13, majf=0, minf=1633 00:29:07.714 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:07.714 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:07.714 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:07.714 issued rwts: total=2835,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:07.714 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:07.714 00:29:07.714 Run status group 0 (all jobs): 00:29:07.715 READ: bw=107MiB/s (113MB/s), 34.0MiB/s-38.1MiB/s (35.6MB/s-40.0MB/s), io=1079MiB (1131MB), run=10046-10046msec 00:29:07.715 ----------------------------------------------------- 00:29:07.715 Suppressions used: 00:29:07.715 count bytes template 00:29:07.715 5 44 /usr/src/fio/parse.c 00:29:07.715 1 8 libtcmalloc_minimal.so 00:29:07.715 1 904 libcrypto.so 00:29:07.715 ----------------------------------------------------- 00:29:07.715 00:29:07.715 21:34:21 -- target/dif.sh@132 -- # destroy_subsystems 0 00:29:07.715 21:34:21 -- target/dif.sh@43 -- # local sub 00:29:07.715 21:34:21 -- target/dif.sh@45 -- # for sub in "$@" 00:29:07.715 21:34:21 -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:07.715 21:34:21 -- target/dif.sh@36 -- # local sub_id=0 00:29:07.715 21:34:21 -- 
target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:07.715 21:34:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:07.715 21:34:21 -- common/autotest_common.sh@10 -- # set +x 00:29:07.715 21:34:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:07.715 21:34:21 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:29:07.715 21:34:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:07.715 21:34:21 -- common/autotest_common.sh@10 -- # set +x 00:29:07.715 21:34:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:07.715 00:29:07.715 real 0m11.800s 00:29:07.715 user 0m46.458s 00:29:07.715 sys 0m1.017s 00:29:07.715 21:34:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:29:07.715 21:34:21 -- common/autotest_common.sh@10 -- # set +x 00:29:07.715 ************************************ 00:29:07.715 END TEST fio_dif_digest 00:29:07.715 ************************************ 00:29:07.715 21:34:21 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:29:07.715 21:34:21 -- target/dif.sh@147 -- # nvmftestfini 00:29:07.715 21:34:21 -- nvmf/common.sh@477 -- # nvmfcleanup 00:29:07.715 21:34:21 -- nvmf/common.sh@117 -- # sync 00:29:07.715 21:34:21 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:07.715 21:34:21 -- nvmf/common.sh@120 -- # set +e 00:29:07.715 21:34:21 -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:07.715 21:34:21 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:07.715 rmmod nvme_tcp 00:29:07.715 rmmod nvme_fabrics 00:29:07.715 rmmod nvme_keyring 00:29:07.715 21:34:21 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:07.715 21:34:21 -- nvmf/common.sh@124 -- # set -e 00:29:07.715 21:34:21 -- nvmf/common.sh@125 -- # return 0 00:29:07.715 21:34:21 -- nvmf/common.sh@478 -- # '[' -n 1387932 ']' 00:29:07.715 21:34:21 -- nvmf/common.sh@479 -- # killprocess 1387932 00:29:07.715 21:34:21 -- common/autotest_common.sh@936 -- # '[' -z 1387932 ']' 00:29:07.715 21:34:21 -- common/autotest_common.sh@940 -- # kill -0 1387932 00:29:07.715 21:34:21 -- common/autotest_common.sh@941 -- # uname 00:29:07.715 21:34:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:07.715 21:34:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1387932 00:29:07.715 21:34:21 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:29:07.715 21:34:21 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:29:07.715 21:34:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1387932' 00:29:07.715 killing process with pid 1387932 00:29:07.715 21:34:21 -- common/autotest_common.sh@955 -- # kill 1387932 00:29:07.715 21:34:21 -- common/autotest_common.sh@960 -- # wait 1387932 00:29:07.715 21:34:22 -- nvmf/common.sh@481 -- # '[' iso == iso ']' 00:29:07.715 21:34:22 -- nvmf/common.sh@482 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh reset 00:29:09.636 Waiting for block devices as requested 00:29:09.636 0000:c9:00.0 (8086 0a54): vfio-pci -> nvme 00:29:09.895 0000:74:02.0 (8086 0cfe): vfio-pci -> idxd 00:29:09.895 0000:f1:02.0 (8086 0cfe): vfio-pci -> idxd 00:29:10.153 0000:cb:00.0 (8086 0a54): vfio-pci -> nvme 00:29:10.153 0000:79:02.0 (8086 0cfe): vfio-pci -> idxd 00:29:10.411 0000:6f:01.0 (8086 0b25): vfio-pci -> idxd 00:29:10.411 0000:6f:02.0 (8086 0cfe): vfio-pci -> idxd 00:29:10.411 0000:f6:01.0 (8086 0b25): vfio-pci -> idxd 00:29:10.669 0000:f6:02.0 (8086 0cfe): vfio-pci -> idxd 00:29:10.669 0000:74:01.0 (8086 0b25): vfio-pci -> idxd 
00:29:10.929 0000:6a:02.0 (8086 0cfe): vfio-pci -> idxd 00:29:10.929 0000:79:01.0 (8086 0b25): vfio-pci -> idxd 00:29:10.929 0000:ec:01.0 (8086 0b25): vfio-pci -> idxd 00:29:11.190 0000:6a:01.0 (8086 0b25): vfio-pci -> idxd 00:29:11.190 0000:ca:00.0 (8086 0a54): vfio-pci -> nvme 00:29:11.451 0000:ec:02.0 (8086 0cfe): vfio-pci -> idxd 00:29:11.451 0000:e7:01.0 (8086 0b25): vfio-pci -> idxd 00:29:11.712 0000:e7:02.0 (8086 0cfe): vfio-pci -> idxd 00:29:11.712 0000:f1:01.0 (8086 0b25): vfio-pci -> idxd 00:29:12.279 21:34:27 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:29:12.279 21:34:27 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:29:12.279 21:34:27 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:12.279 21:34:27 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:12.279 21:34:27 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:12.279 21:34:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:12.279 21:34:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:14.187 21:34:29 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:14.187 00:29:14.187 real 1m19.651s 00:29:14.187 user 8m12.441s 00:29:14.187 sys 0m17.841s 00:29:14.187 21:34:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:29:14.187 21:34:29 -- common/autotest_common.sh@10 -- # set +x 00:29:14.187 ************************************ 00:29:14.187 END TEST nvmf_dif 00:29:14.187 ************************************ 00:29:14.187 21:34:29 -- spdk/autotest.sh@291 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:29:14.187 21:34:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:29:14.187 21:34:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:14.187 21:34:29 -- common/autotest_common.sh@10 -- # set +x 00:29:14.445 ************************************ 00:29:14.445 START TEST nvmf_abort_qd_sizes 00:29:14.445 ************************************ 00:29:14.445 21:34:29 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:29:14.445 * Looking for test storage... 
00:29:14.445 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:29:14.445 21:34:29 -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:29:14.445 21:34:29 -- nvmf/common.sh@7 -- # uname -s 00:29:14.445 21:34:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:14.445 21:34:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:14.445 21:34:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:14.446 21:34:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:14.446 21:34:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:14.446 21:34:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:14.446 21:34:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:14.446 21:34:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:14.446 21:34:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:14.446 21:34:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:14.446 21:34:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b7babf-2e5c-ee11-906e-a4bf01970bf2 00:29:14.446 21:34:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=80b7babf-2e5c-ee11-906e-a4bf01970bf2 00:29:14.446 21:34:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:14.446 21:34:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:14.446 21:34:29 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:29:14.446 21:34:29 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:14.446 21:34:29 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:29:14.446 21:34:29 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:14.446 21:34:29 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:14.446 21:34:29 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:14.446 21:34:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:14.446 21:34:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:14.446 21:34:29 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:14.446 21:34:29 -- paths/export.sh@5 -- # export PATH 00:29:14.446 21:34:29 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:14.446 21:34:29 -- nvmf/common.sh@47 -- # : 0 00:29:14.446 21:34:29 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:14.446 21:34:29 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:14.446 21:34:29 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:14.446 21:34:29 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:14.446 21:34:29 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:14.446 21:34:29 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:14.446 21:34:29 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:14.446 21:34:29 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:14.446 21:34:29 -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:29:14.446 21:34:29 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:29:14.446 21:34:29 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:14.446 21:34:29 -- nvmf/common.sh@437 -- # prepare_net_devs 00:29:14.446 21:34:29 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:29:14.446 21:34:29 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:29:14.446 21:34:29 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:14.446 21:34:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:14.446 21:34:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:14.446 21:34:29 -- nvmf/common.sh@403 -- # [[ phy-fallback != virt ]] 00:29:14.446 21:34:29 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:29:14.446 21:34:29 -- nvmf/common.sh@285 -- # xtrace_disable 00:29:14.446 21:34:29 -- common/autotest_common.sh@10 -- # set +x 00:29:19.720 21:34:34 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:29:19.720 21:34:34 -- nvmf/common.sh@291 -- # pci_devs=() 00:29:19.720 21:34:34 -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:19.720 21:34:34 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:19.720 21:34:34 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:19.720 21:34:34 -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:19.720 21:34:34 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:19.720 21:34:34 -- nvmf/common.sh@295 -- # net_devs=() 00:29:19.720 21:34:34 -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:19.720 21:34:34 -- nvmf/common.sh@296 -- # e810=() 00:29:19.720 21:34:34 -- nvmf/common.sh@296 -- # local -ga e810 00:29:19.720 21:34:34 -- nvmf/common.sh@297 -- # x722=() 00:29:19.720 21:34:34 -- nvmf/common.sh@297 -- # local -ga x722 00:29:19.720 21:34:34 -- nvmf/common.sh@298 -- # mlx=() 00:29:19.721 21:34:34 -- nvmf/common.sh@298 -- # local -ga mlx 00:29:19.721 21:34:34 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:19.721 21:34:34 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:19.721 21:34:34 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:19.721 21:34:34 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:19.721 21:34:34 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:19.721 21:34:34 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:19.721 21:34:34 -- 
nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:19.721 21:34:34 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:19.721 21:34:34 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:19.721 21:34:34 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:19.721 21:34:34 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:19.721 21:34:34 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:19.721 21:34:34 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:19.721 21:34:34 -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:29:19.721 21:34:34 -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:29:19.721 21:34:34 -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:29:19.721 21:34:34 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:19.721 21:34:34 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:19.721 21:34:34 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:29:19.721 Found 0000:27:00.0 (0x8086 - 0x159b) 00:29:19.721 21:34:34 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:19.721 21:34:34 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:19.721 21:34:34 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:19.721 21:34:34 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:19.721 21:34:34 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:19.721 21:34:34 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:19.721 21:34:34 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:29:19.721 Found 0000:27:00.1 (0x8086 - 0x159b) 00:29:19.721 21:34:34 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:19.721 21:34:34 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:19.721 21:34:34 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:19.721 21:34:34 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:19.721 21:34:34 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:19.721 21:34:34 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:19.721 21:34:34 -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:29:19.721 21:34:34 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:19.721 21:34:34 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:19.721 21:34:34 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:29:19.721 21:34:34 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:19.721 21:34:34 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:29:19.721 Found net devices under 0000:27:00.0: cvl_0_0 00:29:19.721 21:34:34 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:29:19.721 21:34:34 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:19.721 21:34:34 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:19.721 21:34:34 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:29:19.721 21:34:34 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:19.721 21:34:34 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:29:19.721 Found net devices under 0000:27:00.1: cvl_0_1 00:29:19.721 21:34:34 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:29:19.721 21:34:34 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:29:19.721 21:34:34 -- nvmf/common.sh@403 -- # is_hw=yes 00:29:19.721 21:34:34 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:29:19.721 21:34:34 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:29:19.721 21:34:34 -- 
nvmf/common.sh@407 -- # nvmf_tcp_init 00:29:19.721 21:34:34 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:19.721 21:34:34 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:19.721 21:34:34 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:19.721 21:34:34 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:19.721 21:34:34 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:19.721 21:34:34 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:19.721 21:34:34 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:19.721 21:34:34 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:19.721 21:34:34 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:19.721 21:34:34 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:19.721 21:34:34 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:19.721 21:34:34 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:19.721 21:34:34 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:19.721 21:34:34 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:19.721 21:34:34 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:19.721 21:34:34 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:19.721 21:34:34 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:19.721 21:34:34 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:19.721 21:34:34 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:19.721 21:34:34 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:19.721 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:19.721 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.486 ms 00:29:19.721 00:29:19.721 --- 10.0.0.2 ping statistics --- 00:29:19.721 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:19.721 rtt min/avg/max/mdev = 0.486/0.486/0.486/0.000 ms 00:29:19.721 21:34:34 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:19.721 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:19.721 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.231 ms 00:29:19.721 00:29:19.721 --- 10.0.0.1 ping statistics --- 00:29:19.721 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:19.721 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:29:19.721 21:34:34 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:19.721 21:34:34 -- nvmf/common.sh@411 -- # return 0 00:29:19.721 21:34:34 -- nvmf/common.sh@439 -- # '[' iso == iso ']' 00:29:19.721 21:34:34 -- nvmf/common.sh@440 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh 00:29:22.258 0000:74:02.0 (8086 0cfe): idxd -> vfio-pci 00:29:22.258 0000:f1:02.0 (8086 0cfe): idxd -> vfio-pci 00:29:22.516 0000:79:02.0 (8086 0cfe): idxd -> vfio-pci 00:29:22.516 0000:6f:01.0 (8086 0b25): idxd -> vfio-pci 00:29:22.516 0000:6f:02.0 (8086 0cfe): idxd -> vfio-pci 00:29:22.516 0000:f6:01.0 (8086 0b25): idxd -> vfio-pci 00:29:22.516 0000:f6:02.0 (8086 0cfe): idxd -> vfio-pci 00:29:22.516 0000:74:01.0 (8086 0b25): idxd -> vfio-pci 00:29:22.516 0000:6a:02.0 (8086 0cfe): idxd -> vfio-pci 00:29:22.516 0000:79:01.0 (8086 0b25): idxd -> vfio-pci 00:29:22.516 0000:ec:01.0 (8086 0b25): idxd -> vfio-pci 00:29:22.774 0000:6a:01.0 (8086 0b25): idxd -> vfio-pci 00:29:22.774 0000:ec:02.0 (8086 0cfe): idxd -> vfio-pci 00:29:22.775 0000:e7:01.0 (8086 0b25): idxd -> vfio-pci 00:29:22.775 0000:e7:02.0 (8086 0cfe): idxd -> vfio-pci 00:29:22.775 0000:f1:01.0 (8086 0b25): idxd -> vfio-pci 00:29:24.151 0000:c9:00.0 (8086 0a54): nvme -> vfio-pci 00:29:24.412 0000:cb:00.0 (8086 0a54): nvme -> vfio-pci 00:29:24.671 0000:ca:00.0 (8086 0a54): nvme -> vfio-pci 00:29:25.237 21:34:39 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:25.237 21:34:39 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:29:25.237 21:34:39 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:29:25.237 21:34:39 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:25.237 21:34:39 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:29:25.237 21:34:39 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:29:25.237 21:34:39 -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:29:25.237 21:34:39 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:29:25.237 21:34:39 -- common/autotest_common.sh@710 -- # xtrace_disable 00:29:25.237 21:34:39 -- common/autotest_common.sh@10 -- # set +x 00:29:25.237 21:34:39 -- nvmf/common.sh@470 -- # nvmfpid=1409147 00:29:25.237 21:34:39 -- nvmf/common.sh@471 -- # waitforlisten 1409147 00:29:25.237 21:34:39 -- common/autotest_common.sh@817 -- # '[' -z 1409147 ']' 00:29:25.237 21:34:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:25.237 21:34:39 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:25.237 21:34:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:25.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:25.237 21:34:39 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:25.237 21:34:39 -- common/autotest_common.sh@10 -- # set +x 00:29:25.237 21:34:39 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:29:25.237 [2024-04-24 21:34:40.021947] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization... 
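The block above is the whole loopback fabric for the abort tests: nvmf_tcp_init moves one NIC port into a private network namespace, addresses both ends, opens TCP/4420 through iptables, ping-checks the path, and then nvmfappstart launches nvmf_tgt inside that namespace. A minimal sketch of the same bring-up, using the interface names, addresses and app flags taken from the log (everything else is illustrative, not SPDK's exact common.sh):

NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"              # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator port stays in the root ns
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                           # same sanity check as in the log
# run the target inside the namespace, with the flags seen above
ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf &

Keeping the target in its own namespace is what lets a single host act as both initiator and target over a real TCP path instead of short-circuiting through loopback.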
00:29:25.237 [2024-04-24 21:34:40.022049] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:25.237 EAL: No free 2048 kB hugepages reported on node 1 00:29:25.237 [2024-04-24 21:34:40.144144] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:25.495 [2024-04-24 21:34:40.237817] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:25.495 [2024-04-24 21:34:40.237857] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:25.495 [2024-04-24 21:34:40.237871] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:25.495 [2024-04-24 21:34:40.237882] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:25.495 [2024-04-24 21:34:40.237891] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:25.495 [2024-04-24 21:34:40.237977] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:25.495 [2024-04-24 21:34:40.237997] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:25.495 [2024-04-24 21:34:40.238091] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:25.495 [2024-04-24 21:34:40.238103] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:29:26.063 21:34:40 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:26.063 21:34:40 -- common/autotest_common.sh@850 -- # return 0 00:29:26.063 21:34:40 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:29:26.063 21:34:40 -- common/autotest_common.sh@716 -- # xtrace_disable 00:29:26.063 21:34:40 -- common/autotest_common.sh@10 -- # set +x 00:29:26.063 21:34:40 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:26.063 21:34:40 -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:29:26.063 21:34:40 -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:29:26.063 21:34:40 -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:29:26.063 21:34:40 -- scripts/common.sh@309 -- # local bdf bdfs 00:29:26.063 21:34:40 -- scripts/common.sh@310 -- # local nvmes 00:29:26.063 21:34:40 -- scripts/common.sh@312 -- # [[ -n 0000:c9:00.0 0000:ca:00.0 0000:cb:00.0 ]] 00:29:26.063 21:34:40 -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:29:26.063 21:34:40 -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:29:26.063 21:34:40 -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:c9:00.0 ]] 00:29:26.063 21:34:40 -- scripts/common.sh@320 -- # uname -s 00:29:26.063 21:34:40 -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:29:26.063 21:34:40 -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:29:26.063 21:34:40 -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:29:26.063 21:34:40 -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:ca:00.0 ]] 00:29:26.063 21:34:40 -- scripts/common.sh@320 -- # uname -s 00:29:26.063 21:34:40 -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:29:26.063 21:34:40 -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:29:26.063 21:34:40 -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:29:26.063 21:34:40 -- scripts/common.sh@319 -- # [[ -e 
/sys/bus/pci/drivers/nvme/0000:cb:00.0 ]] 00:29:26.063 21:34:40 -- scripts/common.sh@320 -- # uname -s 00:29:26.064 21:34:40 -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:29:26.064 21:34:40 -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:29:26.064 21:34:40 -- scripts/common.sh@325 -- # (( 3 )) 00:29:26.064 21:34:40 -- scripts/common.sh@326 -- # printf '%s\n' 0000:c9:00.0 0000:ca:00.0 0000:cb:00.0 00:29:26.064 21:34:40 -- target/abort_qd_sizes.sh@76 -- # (( 3 > 0 )) 00:29:26.064 21:34:40 -- target/abort_qd_sizes.sh@78 -- # nvme=0000:c9:00.0 00:29:26.064 21:34:40 -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:29:26.064 21:34:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:29:26.064 21:34:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:26.064 21:34:40 -- common/autotest_common.sh@10 -- # set +x 00:29:26.064 ************************************ 00:29:26.064 START TEST spdk_target_abort 00:29:26.064 ************************************ 00:29:26.064 21:34:40 -- common/autotest_common.sh@1111 -- # spdk_target 00:29:26.064 21:34:40 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:29:26.064 21:34:40 -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:c9:00.0 -b spdk_target 00:29:26.064 21:34:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:26.064 21:34:40 -- common/autotest_common.sh@10 -- # set +x 00:29:29.350 spdk_targetn1 00:29:29.350 21:34:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:29.350 21:34:43 -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:29.350 21:34:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:29.350 21:34:43 -- common/autotest_common.sh@10 -- # set +x 00:29:29.350 [2024-04-24 21:34:43.746675] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:29.350 21:34:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:29.350 21:34:43 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:29:29.350 21:34:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:29.350 21:34:43 -- common/autotest_common.sh@10 -- # set +x 00:29:29.350 21:34:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:29.351 21:34:43 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:29:29.351 21:34:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:29.351 21:34:43 -- common/autotest_common.sh@10 -- # set +x 00:29:29.351 21:34:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:29.351 21:34:43 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:29:29.351 21:34:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:29.351 21:34:43 -- common/autotest_common.sh@10 -- # set +x 00:29:29.351 [2024-04-24 21:34:43.780118] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:29.351 21:34:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:29.351 21:34:43 -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:29:29.351 21:34:43 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:29:29.351 21:34:43 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:29:29.351 21:34:43 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:29:29.351 21:34:43 -- 
target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:29:29.351 21:34:43 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:29:29.351 21:34:43 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:29:29.351 21:34:43 -- target/abort_qd_sizes.sh@24 -- # local target r 00:29:29.351 21:34:43 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:29:29.351 21:34:43 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:29.351 21:34:43 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:29:29.351 21:34:43 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:29.351 21:34:43 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:29:29.351 21:34:43 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:29.351 21:34:43 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:29:29.351 21:34:43 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:29.351 21:34:43 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:29.351 21:34:43 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:29.351 21:34:43 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:29.351 21:34:43 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:29.351 21:34:43 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:29.351 EAL: No free 2048 kB hugepages reported on node 1 00:29:32.645 Initializing NVMe Controllers 00:29:32.645 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:29:32.645 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:32.645 Initialization complete. Launching workers. 00:29:32.645 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 16435, failed: 0 00:29:32.645 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1830, failed to submit 14605 00:29:32.645 success 762, unsuccess 1068, failed 0 00:29:32.645 21:34:47 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:32.645 21:34:47 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:32.645 EAL: No free 2048 kB hugepages reported on node 1 00:29:35.932 Initializing NVMe Controllers 00:29:35.932 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:29:35.932 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:35.932 Initialization complete. Launching workers. 
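Each "Initializing NVMe Controllers ... abort submitted" block here is one pass of the rabort helper, which assembles the -r transport string one field at a time (the target='trtype:tcp adrfam:IPv4 ...' lines above) and reruns the abort example once per queue depth. A condensed sketch of that loop, with the binary and flags exactly as logged and the body simplified from target/abort_qd_sizes.sh rather than copied verbatim:

rabort() {
    local trtype=$1 adrfam=$2 traddr=$3 trsvcid=$4 subnqn=$5
    local target="trtype:$trtype adrfam:$adrfam traddr:$traddr trsvcid:$trsvcid subnqn:$subnqn"
    local qd
    for qd in 4 24 64; do
        # -q: queue depth under test, -w rw -M 50: 50/50 mixed I/O,
        # -o 4096: 4 KiB blocks; the tool aborts its own in-flight commands
        ./build/examples/abort -q "$qd" -w rw -M 50 -o 4096 -r "$target"
    done
}
rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn

Driving the same mixed workload at queue depths 4, 24 and 64 is the point of the test: abort handling has to stay correct whether the submission queue is nearly empty or saturated.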
00:29:35.932 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8526, failed: 0 00:29:35.932 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1237, failed to submit 7289 00:29:35.932 success 340, unsuccess 897, failed 0 00:29:35.932 21:34:50 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:35.932 21:34:50 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:35.932 EAL: No free 2048 kB hugepages reported on node 1 00:29:39.225 Initializing NVMe Controllers 00:29:39.225 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:29:39.225 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:39.225 Initialization complete. Launching workers. 00:29:39.225 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 39630, failed: 0 00:29:39.225 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2658, failed to submit 36972 00:29:39.225 success 610, unsuccess 2048, failed 0 00:29:39.225 21:34:53 -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:29:39.225 21:34:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:39.225 21:34:53 -- common/autotest_common.sh@10 -- # set +x 00:29:39.225 21:34:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:39.225 21:34:53 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:29:39.225 21:34:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:39.225 21:34:53 -- common/autotest_common.sh@10 -- # set +x 00:29:41.133 21:34:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:41.133 21:34:56 -- target/abort_qd_sizes.sh@61 -- # killprocess 1409147 00:29:41.133 21:34:56 -- common/autotest_common.sh@936 -- # '[' -z 1409147 ']' 00:29:41.133 21:34:56 -- common/autotest_common.sh@940 -- # kill -0 1409147 00:29:41.133 21:34:56 -- common/autotest_common.sh@941 -- # uname 00:29:41.133 21:34:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:41.133 21:34:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1409147 00:29:41.393 21:34:56 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:29:41.393 21:34:56 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:29:41.393 21:34:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1409147' 00:29:41.393 killing process with pid 1409147 00:29:41.393 21:34:56 -- common/autotest_common.sh@955 -- # kill 1409147 00:29:41.393 21:34:56 -- common/autotest_common.sh@960 -- # wait 1409147 00:29:41.651 00:29:41.651 real 0m15.589s 00:29:41.651 user 1m2.835s 00:29:41.651 sys 0m1.212s 00:29:41.651 21:34:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:29:41.651 21:34:56 -- common/autotest_common.sh@10 -- # set +x 00:29:41.651 ************************************ 00:29:41.651 END TEST spdk_target_abort 00:29:41.651 ************************************ 00:29:41.651 21:34:56 -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:29:41.651 21:34:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:29:41.651 21:34:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:41.651 21:34:56 -- common/autotest_common.sh@10 -- # set +x 00:29:41.908 
************************************ 00:29:41.908 START TEST kernel_target_abort 00:29:41.908 ************************************ 00:29:41.908 21:34:56 -- common/autotest_common.sh@1111 -- # kernel_target 00:29:41.908 21:34:56 -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:29:41.908 21:34:56 -- nvmf/common.sh@717 -- # local ip 00:29:41.908 21:34:56 -- nvmf/common.sh@718 -- # ip_candidates=() 00:29:41.908 21:34:56 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:29:41.908 21:34:56 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:41.908 21:34:56 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:41.908 21:34:56 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:29:41.908 21:34:56 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:41.908 21:34:56 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:29:41.908 21:34:56 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:29:41.908 21:34:56 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:29:41.908 21:34:56 -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:29:41.908 21:34:56 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:29:41.908 21:34:56 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:29:41.908 21:34:56 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:41.908 21:34:56 -- nvmf/common.sh@625 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:29:41.908 21:34:56 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:29:41.908 21:34:56 -- nvmf/common.sh@628 -- # local block nvme 00:29:41.908 21:34:56 -- nvmf/common.sh@630 -- # [[ ! 
-e /sys/module/nvmet ]] 00:29:41.908 21:34:56 -- nvmf/common.sh@631 -- # modprobe nvmet 00:29:41.908 21:34:56 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:29:41.908 21:34:56 -- nvmf/common.sh@636 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh reset 00:29:44.469 Waiting for block devices as requested 00:29:44.469 0000:c9:00.0 (8086 0a54): vfio-pci -> nvme 00:29:44.469 0000:74:02.0 (8086 0cfe): vfio-pci -> idxd 00:29:44.469 0000:f1:02.0 (8086 0cfe): vfio-pci -> idxd 00:29:44.469 0000:cb:00.0 (8086 0a54): vfio-pci -> nvme 00:29:44.728 0000:79:02.0 (8086 0cfe): vfio-pci -> idxd 00:29:44.728 0000:6f:01.0 (8086 0b25): vfio-pci -> idxd 00:29:44.990 0000:6f:02.0 (8086 0cfe): vfio-pci -> idxd 00:29:44.990 0000:f6:01.0 (8086 0b25): vfio-pci -> idxd 00:29:45.250 0000:f6:02.0 (8086 0cfe): vfio-pci -> idxd 00:29:45.251 0000:74:01.0 (8086 0b25): vfio-pci -> idxd 00:29:45.251 0000:6a:02.0 (8086 0cfe): vfio-pci -> idxd 00:29:45.510 0000:79:01.0 (8086 0b25): vfio-pci -> idxd 00:29:45.510 0000:ec:01.0 (8086 0b25): vfio-pci -> idxd 00:29:45.770 0000:6a:01.0 (8086 0b25): vfio-pci -> idxd 00:29:45.770 0000:ca:00.0 (8086 0a54): vfio-pci -> nvme 00:29:46.028 0000:ec:02.0 (8086 0cfe): vfio-pci -> idxd 00:29:46.028 0000:e7:01.0 (8086 0b25): vfio-pci -> idxd 00:29:46.286 0000:e7:02.0 (8086 0cfe): vfio-pci -> idxd 00:29:46.286 0000:f1:01.0 (8086 0b25): vfio-pci -> idxd 00:29:47.223 21:35:02 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:29:47.223 21:35:02 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:29:47.223 21:35:02 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:29:47.223 21:35:02 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:29:47.223 21:35:02 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:29:47.223 21:35:02 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:29:47.223 21:35:02 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:29:47.223 21:35:02 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:29:47.223 21:35:02 -- scripts/common.sh@387 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:29:47.223 No valid GPT data, bailing 00:29:47.223 21:35:02 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:29:47.223 21:35:02 -- scripts/common.sh@391 -- # pt= 00:29:47.223 21:35:02 -- scripts/common.sh@392 -- # return 1 00:29:47.223 21:35:02 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:29:47.223 21:35:02 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:29:47.223 21:35:02 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme1n1 ]] 00:29:47.223 21:35:02 -- nvmf/common.sh@641 -- # is_block_zoned nvme1n1 00:29:47.223 21:35:02 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:29:47.223 21:35:02 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:29:47.223 21:35:02 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:29:47.223 21:35:02 -- nvmf/common.sh@642 -- # block_in_use nvme1n1 00:29:47.223 21:35:02 -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:29:47.223 21:35:02 -- scripts/common.sh@387 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/spdk-gpt.py nvme1n1 00:29:47.223 No valid GPT data, bailing 00:29:47.223 21:35:02 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:29:47.223 21:35:02 -- scripts/common.sh@391 -- # pt= 00:29:47.223 21:35:02 -- scripts/common.sh@392 -- # return 1 00:29:47.223 21:35:02 -- nvmf/common.sh@642 -- # nvme=/dev/nvme1n1 
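The "No valid GPT data, bailing" lines are the free-disk scan for the kernel target: each /sys/block/nvme* entry is skipped if zoned, then probed with block_in_use from scripts/common.sh, which runs spdk-gpt.py and reports the device free when no partition table turns up; the scan continues below with nvme2n1, which becomes the namespace backing device. A simplified reading of that loop, assuming it runs from the spdk repo root (not a verbatim copy of nvmf/common.sh):

source ./scripts/common.sh           # provides block_in_use, as sourced in the log
nvme=
for block in /sys/block/nvme*; do
    dev=${block##*/}
    [[ $(<"$block/queue/zoned") == none ]] || continue   # skip zoned namespaces
    # returns nonzero ("No valid GPT data, bailing") when the device is unused
    block_in_use "$dev" || nvme=/dev/$dev
done
echo "selected $nvme"                # this run settles on /dev/nvme2n1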
00:29:47.223 21:35:02 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:29:47.223 21:35:02 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme2n1 ]] 00:29:47.223 21:35:02 -- nvmf/common.sh@641 -- # is_block_zoned nvme2n1 00:29:47.483 21:35:02 -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:29:47.483 21:35:02 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:29:47.483 21:35:02 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:29:47.483 21:35:02 -- nvmf/common.sh@642 -- # block_in_use nvme2n1 00:29:47.483 21:35:02 -- scripts/common.sh@378 -- # local block=nvme2n1 pt 00:29:47.483 21:35:02 -- scripts/common.sh@387 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/spdk-gpt.py nvme2n1 00:29:47.483 No valid GPT data, bailing 00:29:47.483 21:35:02 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:29:47.483 21:35:02 -- scripts/common.sh@391 -- # pt= 00:29:47.483 21:35:02 -- scripts/common.sh@392 -- # return 1 00:29:47.483 21:35:02 -- nvmf/common.sh@642 -- # nvme=/dev/nvme2n1 00:29:47.483 21:35:02 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme2n1 ]] 00:29:47.483 21:35:02 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:47.483 21:35:02 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:29:47.483 21:35:02 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:29:47.483 21:35:02 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:29:47.483 21:35:02 -- nvmf/common.sh@656 -- # echo 1 00:29:47.484 21:35:02 -- nvmf/common.sh@657 -- # echo /dev/nvme2n1 00:29:47.484 21:35:02 -- nvmf/common.sh@658 -- # echo 1 00:29:47.484 21:35:02 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:29:47.484 21:35:02 -- nvmf/common.sh@661 -- # echo tcp 00:29:47.484 21:35:02 -- nvmf/common.sh@662 -- # echo 4420 00:29:47.484 21:35:02 -- nvmf/common.sh@663 -- # echo ipv4 00:29:47.484 21:35:02 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:29:47.484 21:35:02 -- nvmf/common.sh@669 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b7babf-2e5c-ee11-906e-a4bf01970bf2 --hostid=80b7babf-2e5c-ee11-906e-a4bf01970bf2 -a 10.0.0.1 -t tcp -s 4420 00:29:47.484 00:29:47.484 Discovery Log Number of Records 2, Generation counter 2 00:29:47.484 =====Discovery Log Entry 0====== 00:29:47.484 trtype: tcp 00:29:47.484 adrfam: ipv4 00:29:47.484 subtype: current discovery subsystem 00:29:47.484 treq: not specified, sq flow control disable supported 00:29:47.484 portid: 1 00:29:47.484 trsvcid: 4420 00:29:47.484 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:29:47.484 traddr: 10.0.0.1 00:29:47.484 eflags: none 00:29:47.484 sectype: none 00:29:47.484 =====Discovery Log Entry 1====== 00:29:47.484 trtype: tcp 00:29:47.484 adrfam: ipv4 00:29:47.484 subtype: nvme subsystem 00:29:47.484 treq: not specified, sq flow control disable supported 00:29:47.484 portid: 1 00:29:47.484 trsvcid: 4420 00:29:47.484 subnqn: nqn.2016-06.io.spdk:testnqn 00:29:47.484 traddr: 10.0.0.1 00:29:47.484 eflags: none 00:29:47.484 sectype: none 00:29:47.484 21:35:02 -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:29:47.484 21:35:02 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:29:47.484 21:35:02 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:29:47.484 21:35:02 -- 
target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:29:47.484 21:35:02 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:29:47.484 21:35:02 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:29:47.484 21:35:02 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:29:47.484 21:35:02 -- target/abort_qd_sizes.sh@24 -- # local target r 00:29:47.484 21:35:02 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:29:47.484 21:35:02 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:47.484 21:35:02 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:29:47.484 21:35:02 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:47.484 21:35:02 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:29:47.484 21:35:02 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:47.484 21:35:02 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:29:47.484 21:35:02 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:47.484 21:35:02 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:29:47.484 21:35:02 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:47.484 21:35:02 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:47.484 21:35:02 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:47.484 21:35:02 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:47.484 EAL: No free 2048 kB hugepages reported on node 1 00:29:50.858 Initializing NVMe Controllers 00:29:50.858 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:29:50.858 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:50.858 Initialization complete. Launching workers. 00:29:50.858 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 65973, failed: 0 00:29:50.858 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 65973, failed to submit 0 00:29:50.858 success 0, unsuccess 65973, failed 0 00:29:50.858 21:35:05 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:50.858 21:35:05 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:50.858 EAL: No free 2048 kB hugepages reported on node 1 00:29:54.150 Initializing NVMe Controllers 00:29:54.150 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:29:54.150 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:54.150 Initialization complete. Launching workers. 
00:29:54.150 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 115317, failed: 0 00:29:54.150 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 29022, failed to submit 86295 00:29:54.150 success 0, unsuccess 29022, failed 0 00:29:54.150 21:35:08 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:54.150 21:35:08 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:54.150 EAL: No free 2048 kB hugepages reported on node 1 00:29:57.436 Initializing NVMe Controllers 00:29:57.436 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:29:57.436 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:57.436 Initialization complete. Launching workers. 00:29:57.436 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 112526, failed: 0 00:29:57.436 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 28142, failed to submit 84384 00:29:57.436 success 0, unsuccess 28142, failed 0 00:29:57.436 21:35:11 -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:29:57.436 21:35:11 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:29:57.436 21:35:11 -- nvmf/common.sh@675 -- # echo 0 00:29:57.436 21:35:11 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:57.436 21:35:11 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:29:57.436 21:35:11 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:29:57.436 21:35:11 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:57.436 21:35:11 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:29:57.436 21:35:11 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:29:57.436 21:35:11 -- nvmf/common.sh@687 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh 00:29:59.971 0000:74:02.0 (8086 0cfe): idxd -> vfio-pci 00:29:59.971 0000:f1:02.0 (8086 0cfe): idxd -> vfio-pci 00:29:59.971 0000:79:02.0 (8086 0cfe): idxd -> vfio-pci 00:29:59.971 0000:6f:01.0 (8086 0b25): idxd -> vfio-pci 00:29:59.971 0000:6f:02.0 (8086 0cfe): idxd -> vfio-pci 00:29:59.971 0000:f6:01.0 (8086 0b25): idxd -> vfio-pci 00:29:59.971 0000:f6:02.0 (8086 0cfe): idxd -> vfio-pci 00:29:59.971 0000:74:01.0 (8086 0b25): idxd -> vfio-pci 00:29:59.971 0000:6a:02.0 (8086 0cfe): idxd -> vfio-pci 00:29:59.971 0000:79:01.0 (8086 0b25): idxd -> vfio-pci 00:29:59.971 0000:ec:01.0 (8086 0b25): idxd -> vfio-pci 00:29:59.971 0000:6a:01.0 (8086 0b25): idxd -> vfio-pci 00:29:59.971 0000:ec:02.0 (8086 0cfe): idxd -> vfio-pci 00:29:59.971 0000:e7:01.0 (8086 0b25): idxd -> vfio-pci 00:29:59.971 0000:e7:02.0 (8086 0cfe): idxd -> vfio-pci 00:29:59.971 0000:f1:01.0 (8086 0b25): idxd -> vfio-pci 00:30:01.882 0000:c9:00.0 (8086 0a54): nvme -> vfio-pci 00:30:01.882 0000:cb:00.0 (8086 0a54): nvme -> vfio-pci 00:30:01.882 0000:ca:00.0 (8086 0a54): nvme -> vfio-pci 00:30:02.144 00:30:02.144 real 0m20.424s 00:30:02.144 user 0m7.711s 00:30:02.144 sys 0m5.724s 00:30:02.144 21:35:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:30:02.144 21:35:17 -- common/autotest_common.sh@10 -- # set +x 
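clean_kernel_target, whose rm/rmdir trail appears just above, unwinds the configfs tree in the reverse of the order configure_kernel_target built it, then unloads the nvmet modules. The same steps spelled out, with paths and nqn as in the log (the enable-attribute path is an assumption, since the log only shows the bare "echo 0"):

nqn=nqn.2016-06.io.spdk:testnqn
subsys=/sys/kernel/config/nvmet/subsystems/$nqn
echo 0 > "$subsys/namespaces/1/enable"                    # disable namespace (path assumed)
rm -f /sys/kernel/config/nvmet/ports/1/subsystems/"$nqn"  # drop the port -> subsystem link
rmdir "$subsys/namespaces/1"
rmdir /sys/kernel/config/nvmet/ports/1
rmdir "$subsys"
modprobe -r nvmet_tcp nvmet                               # only once nothing else holds nvmet

Ordering matters here: configfs refuses to rmdir a subsystem while a port still links to it or a namespace directory still exists, which is why the symlink and the namespace go first.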
00:30:02.144 ************************************ 00:30:02.144 END TEST kernel_target_abort 00:30:02.144 ************************************ 00:30:02.144 21:35:17 -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:30:02.144 21:35:17 -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:30:02.144 21:35:17 -- nvmf/common.sh@477 -- # nvmfcleanup 00:30:02.144 21:35:17 -- nvmf/common.sh@117 -- # sync 00:30:02.144 21:35:17 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:02.144 21:35:17 -- nvmf/common.sh@120 -- # set +e 00:30:02.144 21:35:17 -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:02.144 21:35:17 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:02.144 rmmod nvme_tcp 00:30:02.144 rmmod nvme_fabrics 00:30:02.404 rmmod nvme_keyring 00:30:02.404 21:35:17 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:02.404 21:35:17 -- nvmf/common.sh@124 -- # set -e 00:30:02.404 21:35:17 -- nvmf/common.sh@125 -- # return 0 00:30:02.404 21:35:17 -- nvmf/common.sh@478 -- # '[' -n 1409147 ']' 00:30:02.404 21:35:17 -- nvmf/common.sh@479 -- # killprocess 1409147 00:30:02.404 21:35:17 -- common/autotest_common.sh@936 -- # '[' -z 1409147 ']' 00:30:02.404 21:35:17 -- common/autotest_common.sh@940 -- # kill -0 1409147 00:30:02.404 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (1409147) - No such process 00:30:02.404 21:35:17 -- common/autotest_common.sh@963 -- # echo 'Process with pid 1409147 is not found' 00:30:02.404 Process with pid 1409147 is not found 00:30:02.404 21:35:17 -- nvmf/common.sh@481 -- # '[' iso == iso ']' 00:30:02.404 21:35:17 -- nvmf/common.sh@482 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh reset 00:30:04.939 Waiting for block devices as requested 00:30:04.939 0000:c9:00.0 (8086 0a54): vfio-pci -> nvme 00:30:04.939 0000:74:02.0 (8086 0cfe): vfio-pci -> idxd 00:30:04.939 0000:f1:02.0 (8086 0cfe): vfio-pci -> idxd 00:30:04.939 0000:cb:00.0 (8086 0a54): vfio-pci -> nvme 00:30:05.196 0000:79:02.0 (8086 0cfe): vfio-pci -> idxd 00:30:05.196 0000:6f:01.0 (8086 0b25): vfio-pci -> idxd 00:30:05.196 0000:6f:02.0 (8086 0cfe): vfio-pci -> idxd 00:30:05.454 0000:f6:01.0 (8086 0b25): vfio-pci -> idxd 00:30:05.454 0000:f6:02.0 (8086 0cfe): vfio-pci -> idxd 00:30:05.714 0000:74:01.0 (8086 0b25): vfio-pci -> idxd 00:30:05.714 0000:6a:02.0 (8086 0cfe): vfio-pci -> idxd 00:30:05.714 0000:79:01.0 (8086 0b25): vfio-pci -> idxd 00:30:05.971 0000:ec:01.0 (8086 0b25): vfio-pci -> idxd 00:30:05.971 0000:6a:01.0 (8086 0b25): vfio-pci -> idxd 00:30:06.231 0000:ca:00.0 (8086 0a54): vfio-pci -> nvme 00:30:06.231 0000:ec:02.0 (8086 0cfe): vfio-pci -> idxd 00:30:06.491 0000:e7:01.0 (8086 0b25): vfio-pci -> idxd 00:30:06.491 0000:e7:02.0 (8086 0cfe): vfio-pci -> idxd 00:30:06.750 0000:f1:01.0 (8086 0b25): vfio-pci -> idxd 00:30:07.010 21:35:21 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:30:07.010 21:35:21 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:30:07.010 21:35:21 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:07.010 21:35:21 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:07.010 21:35:21 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:07.010 21:35:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:07.010 21:35:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:09.547 21:35:23 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:09.547 00:30:09.547 real 0m54.802s 00:30:09.547 user 1m14.469s 00:30:09.547 sys 
0m14.955s 00:30:09.547 21:35:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:30:09.547 21:35:23 -- common/autotest_common.sh@10 -- # set +x 00:30:09.547 ************************************ 00:30:09.547 END TEST nvmf_abort_qd_sizes 00:30:09.547 ************************************ 00:30:09.547 21:35:24 -- spdk/autotest.sh@293 -- # run_test keyring_file /var/jenkins/workspace/dsa-phy-autotest/spdk/test/keyring/file.sh 00:30:09.547 21:35:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:30:09.547 21:35:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:09.547 21:35:24 -- common/autotest_common.sh@10 -- # set +x 00:30:09.547 ************************************ 00:30:09.547 START TEST keyring_file 00:30:09.547 ************************************ 00:30:09.547 21:35:24 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/keyring/file.sh 00:30:09.547 * Looking for test storage... 00:30:09.547 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/keyring 00:30:09.547 21:35:24 -- keyring/file.sh@11 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/keyring/common.sh 00:30:09.547 21:35:24 -- keyring/common.sh@4 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:30:09.547 21:35:24 -- nvmf/common.sh@7 -- # uname -s 00:30:09.547 21:35:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:09.547 21:35:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:09.547 21:35:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:09.547 21:35:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:09.547 21:35:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:09.547 21:35:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:09.547 21:35:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:09.547 21:35:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:09.547 21:35:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:09.547 21:35:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:09.547 21:35:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b7babf-2e5c-ee11-906e-a4bf01970bf2 00:30:09.547 21:35:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=80b7babf-2e5c-ee11-906e-a4bf01970bf2 00:30:09.547 21:35:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:09.547 21:35:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:09.547 21:35:24 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:30:09.547 21:35:24 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:09.547 21:35:24 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:30:09.547 21:35:24 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:09.547 21:35:24 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:09.547 21:35:24 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:09.547 21:35:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:09.547 21:35:24 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:09.547 21:35:24 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:09.547 21:35:24 -- paths/export.sh@5 -- # export PATH 00:30:09.547 21:35:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:09.547 21:35:24 -- nvmf/common.sh@47 -- # : 0 00:30:09.548 21:35:24 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:09.548 21:35:24 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:09.548 21:35:24 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:09.548 21:35:24 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:09.548 21:35:24 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:09.548 21:35:24 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:09.548 21:35:24 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:09.548 21:35:24 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:09.548 21:35:24 -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:30:09.548 21:35:24 -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:30:09.548 21:35:24 -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:30:09.548 21:35:24 -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:30:09.548 21:35:24 -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:30:09.548 21:35:24 -- keyring/file.sh@24 -- # trap cleanup EXIT 00:30:09.548 21:35:24 -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:30:09.548 21:35:24 -- keyring/common.sh@15 -- # local name key digest path 00:30:09.548 21:35:24 -- keyring/common.sh@17 -- # name=key0 00:30:09.548 21:35:24 -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:30:09.548 21:35:24 -- keyring/common.sh@17 -- # digest=0 00:30:09.548 21:35:24 -- keyring/common.sh@18 -- # mktemp 00:30:09.548 21:35:24 -- keyring/common.sh@18 -- # path=/tmp/tmp.Zlus2KR2U2 00:30:09.548 21:35:24 -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:30:09.548 21:35:24 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:30:09.548 21:35:24 -- nvmf/common.sh@691 -- # local prefix key digest 00:30:09.548 21:35:24 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:30:09.548 21:35:24 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:30:09.548 21:35:24 -- nvmf/common.sh@693 -- # digest=0 00:30:09.548 21:35:24 -- nvmf/common.sh@694 -- # python - 
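The prep_key trace above reduces to three steps: mktemp a scratch file, render the configured hex string into the NVMe TLS PSK interchange form via an inline python helper, and lock the file down to mode 0600. A hedged sketch of those steps follows; the interchange layout (prefix, two-digit digest id, base64 of the key bytes plus a little-endian CRC32) and the treatment of the key string as raw ASCII are assumptions reconstructed from the traced helper names, not the verbatim SPDK code.

    # Sketch only: a reconstruction of the traced prep_key /
    # format_interchange_psk helpers. ASSUMPTIONS: the key string is used
    # as raw ASCII, and the layout is
    # NVMeTLSkey-1:<digest>:<base64(key + CRC32LE(key))>:
    format_interchange_psk() {
        python3 -c 'import base64, struct, sys, zlib; k = sys.argv[1].encode(); print("NVMeTLSkey-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(k + struct.pack("<I", zlib.crc32(k))).decode()))' "$1" "$2"
    }
    path=$(mktemp)                  # e.g. /tmp/tmp.Zlus2KR2U2 in this run
    format_interchange_psk 00112233445566778899aabbccddeeff 0 > "$path"
    chmod 0600 "$path"              # looser modes are rejected later in this log
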
00:30:09.548 21:35:24 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.Zlus2KR2U2 00:30:09.548 21:35:24 -- keyring/common.sh@23 -- # echo /tmp/tmp.Zlus2KR2U2 00:30:09.548 21:35:24 -- keyring/file.sh@26 -- # key0path=/tmp/tmp.Zlus2KR2U2 00:30:09.548 21:35:24 -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:30:09.548 21:35:24 -- keyring/common.sh@15 -- # local name key digest path 00:30:09.548 21:35:24 -- keyring/common.sh@17 -- # name=key1 00:30:09.548 21:35:24 -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:30:09.548 21:35:24 -- keyring/common.sh@17 -- # digest=0 00:30:09.548 21:35:24 -- keyring/common.sh@18 -- # mktemp 00:30:09.548 21:35:24 -- keyring/common.sh@18 -- # path=/tmp/tmp.5V2JAp0U0m 00:30:09.548 21:35:24 -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:30:09.548 21:35:24 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:30:09.548 21:35:24 -- nvmf/common.sh@691 -- # local prefix key digest 00:30:09.548 21:35:24 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:30:09.548 21:35:24 -- nvmf/common.sh@693 -- # key=112233445566778899aabbccddeeff00 00:30:09.548 21:35:24 -- nvmf/common.sh@693 -- # digest=0 00:30:09.548 21:35:24 -- nvmf/common.sh@694 -- # python - 00:30:09.548 21:35:24 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.5V2JAp0U0m 00:30:09.548 21:35:24 -- keyring/common.sh@23 -- # echo /tmp/tmp.5V2JAp0U0m 00:30:09.548 21:35:24 -- keyring/file.sh@27 -- # key1path=/tmp/tmp.5V2JAp0U0m 00:30:09.548 21:35:24 -- keyring/file.sh@30 -- # tgtpid=1420513 00:30:09.548 21:35:24 -- keyring/file.sh@32 -- # waitforlisten 1420513 00:30:09.548 21:35:24 -- common/autotest_common.sh@817 -- # '[' -z 1420513 ']' 00:30:09.548 21:35:24 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:09.548 21:35:24 -- common/autotest_common.sh@822 -- # local max_retries=100 00:30:09.548 21:35:24 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:09.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:09.548 21:35:24 -- common/autotest_common.sh@826 -- # xtrace_disable 00:30:09.548 21:35:24 -- keyring/file.sh@29 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt 00:30:09.548 21:35:24 -- common/autotest_common.sh@10 -- # set +x 00:30:09.548 [2024-04-24 21:35:24.445789] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization... 
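Before any keyring RPCs can run, the trace launches a bare spdk_tgt and blocks in waitforlisten until the target's RPC socket answers. A minimal stand-in for that pattern, assuming rpc_get_methods as the readiness probe (the real waitforlisten in autotest_common.sh does more bookkeeping than this):

    # Launch the target and poll its default RPC socket, /var/tmp/spdk.sock
    SPDK=/var/jenkins/workspace/dsa-phy-autotest/spdk
    "$SPDK/build/bin/spdk_tgt" &
    tgtpid=$!                               # 1420513 in this run
    until "$SPDK/scripts/rpc.py" rpc_get_methods &> /dev/null; do
        kill -0 "$tgtpid" || exit 1         # bail out if the target died
        sleep 0.1
    done
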
00:30:09.548 [2024-04-24 21:35:24.445939] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1420513 ] 00:30:09.809 EAL: No free 2048 kB hugepages reported on node 1 00:30:09.809 [2024-04-24 21:35:24.579900] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:09.809 [2024-04-24 21:35:24.673093] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:10.379 21:35:25 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:30:10.379 21:35:25 -- common/autotest_common.sh@850 -- # return 0 00:30:10.379 21:35:25 -- keyring/file.sh@33 -- # rpc_cmd 00:30:10.379 21:35:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:10.379 21:35:25 -- common/autotest_common.sh@10 -- # set +x 00:30:10.379 [2024-04-24 21:35:25.155364] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:10.379 null0 00:30:10.379 [2024-04-24 21:35:25.187343] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:30:10.379 [2024-04-24 21:35:25.187661] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:30:10.379 [2024-04-24 21:35:25.195339] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:30:10.379 21:35:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:10.379 21:35:25 -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:30:10.379 21:35:25 -- common/autotest_common.sh@638 -- # local es=0 00:30:10.379 21:35:25 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:30:10.379 21:35:25 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:30:10.379 21:35:25 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:30:10.379 21:35:25 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:30:10.379 21:35:25 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:30:10.379 21:35:25 -- common/autotest_common.sh@641 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:30:10.379 21:35:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:10.379 21:35:25 -- common/autotest_common.sh@10 -- # set +x 00:30:10.379 [2024-04-24 21:35:25.211351] nvmf_rpc.c: 766:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:30:10.379 request: 00:30:10.379 { 00:30:10.379 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:30:10.379 "secure_channel": false, 00:30:10.379 "listen_address": { 00:30:10.379 "trtype": "tcp", 00:30:10.379 "traddr": "127.0.0.1", 00:30:10.379 "trsvcid": "4420" 00:30:10.379 }, 00:30:10.379 "method": "nvmf_subsystem_add_listener", 00:30:10.379 "req_id": 1 00:30:10.379 } 00:30:10.379 Got JSON-RPC error response 00:30:10.379 response: 00:30:10.379 { 00:30:10.379 "code": -32602, 00:30:10.379 "message": "Invalid parameters" 00:30:10.379 } 00:30:10.379 21:35:25 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:30:10.379 21:35:25 -- common/autotest_common.sh@641 -- # es=1 00:30:10.379 21:35:25 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:30:10.379 21:35:25 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:30:10.379 21:35:25 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:30:10.379 21:35:25 -- 
keyring/file.sh@46 -- # bperfpid=1420785 00:30:10.379 21:35:25 -- keyring/file.sh@48 -- # waitforlisten 1420785 /var/tmp/bperf.sock 00:30:10.379 21:35:25 -- common/autotest_common.sh@817 -- # '[' -z 1420785 ']' 00:30:10.379 21:35:25 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:10.379 21:35:25 -- common/autotest_common.sh@822 -- # local max_retries=100 00:30:10.379 21:35:25 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:10.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:10.379 21:35:25 -- common/autotest_common.sh@826 -- # xtrace_disable 00:30:10.379 21:35:25 -- common/autotest_common.sh@10 -- # set +x 00:30:10.379 21:35:25 -- keyring/file.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:30:10.379 [2024-04-24 21:35:25.297818] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization... 00:30:10.380 [2024-04-24 21:35:25.297925] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1420785 ] 00:30:10.640 EAL: No free 2048 kB hugepages reported on node 1 00:30:10.640 [2024-04-24 21:35:25.407766] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:10.640 [2024-04-24 21:35:25.561352] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:11.216 21:35:25 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:30:11.216 21:35:25 -- common/autotest_common.sh@850 -- # return 0 00:30:11.216 21:35:25 -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Zlus2KR2U2 00:30:11.216 21:35:25 -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Zlus2KR2U2 00:30:11.216 21:35:26 -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.5V2JAp0U0m 00:30:11.216 21:35:26 -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.5V2JAp0U0m 00:30:11.474 21:35:26 -- keyring/file.sh@51 -- # jq -r .path 00:30:11.474 21:35:26 -- keyring/file.sh@51 -- # get_key key0 00:30:11.474 21:35:26 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:11.474 21:35:26 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:11.474 21:35:26 -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:11.474 21:35:26 -- keyring/file.sh@51 -- # [[ /tmp/tmp.Zlus2KR2U2 == \/\t\m\p\/\t\m\p\.\Z\l\u\s\2\K\R\2\U\2 ]] 00:30:11.474 21:35:26 -- keyring/file.sh@52 -- # get_key key1 00:30:11.474 21:35:26 -- keyring/file.sh@52 -- # jq -r .path 00:30:11.474 21:35:26 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:11.474 21:35:26 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:11.474 21:35:26 -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:11.733 21:35:26 -- keyring/file.sh@52 -- # [[ /tmp/tmp.5V2JAp0U0m == \/\t\m\p\/\t\m\p\.\5\V\2\J\A\p\0\U\0\m ]] 00:30:11.733 21:35:26 -- keyring/file.sh@53 -- # get_refcnt key0 00:30:11.733 21:35:26 -- 
keyring/common.sh@12 -- # get_key key0 00:30:11.733 21:35:26 -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:11.733 21:35:26 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:11.733 21:35:26 -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:11.733 21:35:26 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:11.734 21:35:26 -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:30:11.734 21:35:26 -- keyring/file.sh@54 -- # get_refcnt key1 00:30:11.734 21:35:26 -- keyring/common.sh@12 -- # get_key key1 00:30:11.734 21:35:26 -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:11.734 21:35:26 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:11.734 21:35:26 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:11.734 21:35:26 -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:11.992 21:35:26 -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:30:11.992 21:35:26 -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:11.992 21:35:26 -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:11.992 [2024-04-24 21:35:26.950612] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:30:12.251 nvme0n1 00:30:12.251 21:35:27 -- keyring/file.sh@59 -- # get_refcnt key0 00:30:12.251 21:35:27 -- keyring/common.sh@12 -- # get_key key0 00:30:12.251 21:35:27 -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:12.251 21:35:27 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:12.251 21:35:27 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:12.251 21:35:27 -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:12.251 21:35:27 -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:30:12.251 21:35:27 -- keyring/file.sh@60 -- # get_refcnt key1 00:30:12.251 21:35:27 -- keyring/common.sh@12 -- # get_key key1 00:30:12.251 21:35:27 -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:12.251 21:35:27 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:12.251 21:35:27 -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:12.251 21:35:27 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:12.509 21:35:27 -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:30:12.509 21:35:27 -- keyring/file.sh@62 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:12.509 Running I/O for 1 seconds... 
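Condensed, the happy-path flow traced above is: register both PSK files with the bdevperf RPC server, confirm each key's refcnt is 1, attach an NVMe/TCP controller with --psk key0 (which raises key0's refcnt to 2), and drive the one-second randrw workload through bdevperf.py. The commands below are the ones from the trace, with the repeated rpc.py invocation factored into a variable and the two-step jq filter from keyring/common.sh collapsed into one expression:

    RPC="/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
    $RPC keyring_file_add_key key0 /tmp/tmp.Zlus2KR2U2
    $RPC keyring_file_add_key key1 /tmp/tmp.5V2JAp0U0m
    # Refcounts come from filtering keyring_get_keys output with jq
    $RPC keyring_get_keys | jq -r '.[] | select(.name == "key0") | .refcnt'   # 1
    $RPC bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
    $RPC keyring_get_keys | jq -r '.[] | select(.name == "key0") | .refcnt'   # 2, held by nvme0
    /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bperf.sock perform_tests
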
00:30:13.448 00:30:13.448 Latency(us) 00:30:13.448 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:13.448 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:30:13.448 nvme0n1 : 1.01 9762.28 38.13 0.00 0.00 13035.63 8312.72 21799.34 00:30:13.448 =================================================================================================================== 00:30:13.448 Total : 9762.28 38.13 0.00 0.00 13035.63 8312.72 21799.34 00:30:13.448 0 00:30:13.448 21:35:28 -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:30:13.448 21:35:28 -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:30:13.709 21:35:28 -- keyring/file.sh@65 -- # get_refcnt key0 00:30:13.709 21:35:28 -- keyring/common.sh@12 -- # get_key key0 00:30:13.709 21:35:28 -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:13.709 21:35:28 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:13.709 21:35:28 -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:13.709 21:35:28 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:13.969 21:35:28 -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:30:13.969 21:35:28 -- keyring/file.sh@66 -- # get_refcnt key1 00:30:13.969 21:35:28 -- keyring/common.sh@12 -- # get_key key1 00:30:13.969 21:35:28 -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:13.969 21:35:28 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:13.969 21:35:28 -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:13.969 21:35:28 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:13.969 21:35:28 -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:30:13.969 21:35:28 -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:30:13.969 21:35:28 -- common/autotest_common.sh@638 -- # local es=0 00:30:13.969 21:35:28 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:30:13.969 21:35:28 -- common/autotest_common.sh@626 -- # local arg=bperf_cmd 00:30:13.969 21:35:28 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:30:13.969 21:35:28 -- common/autotest_common.sh@630 -- # type -t bperf_cmd 00:30:13.969 21:35:28 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:30:13.969 21:35:28 -- common/autotest_common.sh@641 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:30:13.969 21:35:28 -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:30:14.227 [2024-04-24 21:35:28.985809] /var/jenkins/workspace/dsa-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:30:14.227 [2024-04-24 21:35:28.986131] 
nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000009240 (107): Transport endpoint is not connected 00:30:14.227 [2024-04-24 21:35:28.987111] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000009240 (9): Bad file descriptor 00:30:14.227 [2024-04-24 21:35:28.988104] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:14.227 [2024-04-24 21:35:28.988122] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:30:14.227 [2024-04-24 21:35:28.988132] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:14.227 request: 00:30:14.227 { 00:30:14.227 "name": "nvme0", 00:30:14.227 "trtype": "tcp", 00:30:14.227 "traddr": "127.0.0.1", 00:30:14.227 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:14.227 "adrfam": "ipv4", 00:30:14.227 "trsvcid": "4420", 00:30:14.227 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:14.227 "psk": "key1", 00:30:14.227 "method": "bdev_nvme_attach_controller", 00:30:14.227 "req_id": 1 00:30:14.227 } 00:30:14.227 Got JSON-RPC error response 00:30:14.227 response: 00:30:14.227 { 00:30:14.227 "code": -32602, 00:30:14.227 "message": "Invalid parameters" 00:30:14.227 } 00:30:14.227 21:35:29 -- common/autotest_common.sh@641 -- # es=1 00:30:14.227 21:35:29 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:30:14.227 21:35:29 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:30:14.227 21:35:29 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:30:14.227 21:35:29 -- keyring/file.sh@71 -- # get_refcnt key0 00:30:14.227 21:35:29 -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:14.227 21:35:29 -- keyring/common.sh@12 -- # get_key key0 00:30:14.227 21:35:29 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:14.227 21:35:29 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:14.227 21:35:29 -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:14.227 21:35:29 -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:30:14.227 21:35:29 -- keyring/file.sh@72 -- # get_refcnt key1 00:30:14.227 21:35:29 -- keyring/common.sh@12 -- # get_key key1 00:30:14.227 21:35:29 -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:14.227 21:35:29 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:14.227 21:35:29 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:14.227 21:35:29 -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:14.486 21:35:29 -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:30:14.486 21:35:29 -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:30:14.486 21:35:29 -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:30:14.486 21:35:29 -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:30:14.486 21:35:29 -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:30:14.745 21:35:29 -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:30:14.745 21:35:29 -- keyring/file.sh@77 -- # jq length 00:30:14.745 21:35:29 -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:14.745 21:35:29 -- 
keyring/file.sh@77 -- # (( 0 == 0 )) 00:30:14.745 21:35:29 -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.Zlus2KR2U2 00:30:14.745 21:35:29 -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.Zlus2KR2U2 00:30:14.745 21:35:29 -- common/autotest_common.sh@638 -- # local es=0 00:30:14.745 21:35:29 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.Zlus2KR2U2 00:30:14.745 21:35:29 -- common/autotest_common.sh@626 -- # local arg=bperf_cmd 00:30:14.745 21:35:29 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:30:14.745 21:35:29 -- common/autotest_common.sh@630 -- # type -t bperf_cmd 00:30:14.745 21:35:29 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:30:14.745 21:35:29 -- common/autotest_common.sh@641 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Zlus2KR2U2 00:30:14.745 21:35:29 -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Zlus2KR2U2 00:30:15.004 [2024-04-24 21:35:29.814120] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.Zlus2KR2U2': 0100660 00:30:15.004 [2024-04-24 21:35:29.814153] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:30:15.004 request: 00:30:15.004 { 00:30:15.004 "name": "key0", 00:30:15.004 "path": "/tmp/tmp.Zlus2KR2U2", 00:30:15.004 "method": "keyring_file_add_key", 00:30:15.004 "req_id": 1 00:30:15.004 } 00:30:15.004 Got JSON-RPC error response 00:30:15.004 response: 00:30:15.004 { 00:30:15.004 "code": -1, 00:30:15.004 "message": "Operation not permitted" 00:30:15.004 } 00:30:15.004 21:35:29 -- common/autotest_common.sh@641 -- # es=1 00:30:15.004 21:35:29 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:30:15.004 21:35:29 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:30:15.004 21:35:29 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:30:15.004 21:35:29 -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.Zlus2KR2U2 00:30:15.004 21:35:29 -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Zlus2KR2U2 00:30:15.004 21:35:29 -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Zlus2KR2U2 00:30:15.265 21:35:29 -- keyring/file.sh@86 -- # rm -f /tmp/tmp.Zlus2KR2U2 00:30:15.265 21:35:29 -- keyring/file.sh@88 -- # get_refcnt key0 00:30:15.265 21:35:29 -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:15.265 21:35:29 -- keyring/common.sh@12 -- # get_key key0 00:30:15.265 21:35:29 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:15.265 21:35:29 -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:15.265 21:35:29 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:15.265 21:35:30 -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:30:15.265 21:35:30 -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:15.265 21:35:30 -- common/autotest_common.sh@638 -- # local es=0 00:30:15.265 21:35:30 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:15.265 21:35:30 -- 
common/autotest_common.sh@626 -- # local arg=bperf_cmd 00:30:15.265 21:35:30 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:30:15.265 21:35:30 -- common/autotest_common.sh@630 -- # type -t bperf_cmd 00:30:15.265 21:35:30 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:30:15.265 21:35:30 -- common/autotest_common.sh@641 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:15.265 21:35:30 -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:15.533 [2024-04-24 21:35:30.250313] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.Zlus2KR2U2': No such file or directory 00:30:15.533 [2024-04-24 21:35:30.250361] nvme_tcp.c:2570:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:30:15.533 [2024-04-24 21:35:30.250390] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:30:15.533 [2024-04-24 21:35:30.250402] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:30:15.533 [2024-04-24 21:35:30.250413] bdev_nvme.c:6191:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:30:15.533 request: 00:30:15.533 { 00:30:15.533 "name": "nvme0", 00:30:15.533 "trtype": "tcp", 00:30:15.533 "traddr": "127.0.0.1", 00:30:15.533 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:15.533 "adrfam": "ipv4", 00:30:15.533 "trsvcid": "4420", 00:30:15.533 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:15.533 "psk": "key0", 00:30:15.533 "method": "bdev_nvme_attach_controller", 00:30:15.533 "req_id": 1 00:30:15.533 } 00:30:15.533 Got JSON-RPC error response 00:30:15.533 response: 00:30:15.533 { 00:30:15.533 "code": -19, 00:30:15.533 "message": "No such device" 00:30:15.533 } 00:30:15.533 21:35:30 -- common/autotest_common.sh@641 -- # es=1 00:30:15.533 21:35:30 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:30:15.533 21:35:30 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:30:15.533 21:35:30 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:30:15.533 21:35:30 -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:30:15.533 21:35:30 -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:30:15.533 21:35:30 -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:30:15.533 21:35:30 -- keyring/common.sh@15 -- # local name key digest path 00:30:15.533 21:35:30 -- keyring/common.sh@17 -- # name=key0 00:30:15.533 21:35:30 -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:30:15.533 21:35:30 -- keyring/common.sh@17 -- # digest=0 00:30:15.533 21:35:30 -- keyring/common.sh@18 -- # mktemp 00:30:15.533 21:35:30 -- keyring/common.sh@18 -- # path=/tmp/tmp.Oxh28IQS5s 00:30:15.533 21:35:30 -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:30:15.533 21:35:30 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:30:15.533 21:35:30 -- nvmf/common.sh@691 -- # local prefix key digest 00:30:15.533 21:35:30 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:30:15.533 21:35:30 -- nvmf/common.sh@693 
-- # key=00112233445566778899aabbccddeeff 00:30:15.533 21:35:30 -- nvmf/common.sh@693 -- # digest=0 00:30:15.534 21:35:30 -- nvmf/common.sh@694 -- # python - 00:30:15.534 21:35:30 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.Oxh28IQS5s 00:30:15.534 21:35:30 -- keyring/common.sh@23 -- # echo /tmp/tmp.Oxh28IQS5s 00:30:15.534 21:35:30 -- keyring/file.sh@95 -- # key0path=/tmp/tmp.Oxh28IQS5s 00:30:15.534 21:35:30 -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Oxh28IQS5s 00:30:15.534 21:35:30 -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Oxh28IQS5s 00:30:15.795 21:35:30 -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:15.795 21:35:30 -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:16.054 nvme0n1 00:30:16.054 21:35:30 -- keyring/file.sh@99 -- # get_refcnt key0 00:30:16.054 21:35:30 -- keyring/common.sh@12 -- # get_key key0 00:30:16.054 21:35:30 -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:16.054 21:35:30 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:16.054 21:35:30 -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:16.054 21:35:30 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:16.313 21:35:31 -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:30:16.313 21:35:31 -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:30:16.313 21:35:31 -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:30:16.313 21:35:31 -- keyring/file.sh@101 -- # get_key key0 00:30:16.313 21:35:31 -- keyring/file.sh@101 -- # jq -r .removed 00:30:16.313 21:35:31 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:16.313 21:35:31 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:16.313 21:35:31 -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:16.570 21:35:31 -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:30:16.570 21:35:31 -- keyring/file.sh@102 -- # get_refcnt key0 00:30:16.570 21:35:31 -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:16.570 21:35:31 -- keyring/common.sh@12 -- # get_key key0 00:30:16.570 21:35:31 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:16.570 21:35:31 -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:16.570 21:35:31 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:16.570 21:35:31 -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:30:16.570 21:35:31 -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:30:16.570 21:35:31 -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:30:16.828 21:35:31 -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:30:16.828 21:35:31 -- keyring/file.sh@104 -- # jq length 00:30:16.828 21:35:31 -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:16.828 21:35:31 -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:30:16.828 21:35:31 -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Oxh28IQS5s 00:30:16.828 21:35:31 -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Oxh28IQS5s 00:30:17.086 21:35:31 -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.5V2JAp0U0m 00:30:17.086 21:35:31 -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.5V2JAp0U0m 00:30:17.086 21:35:32 -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:17.086 21:35:32 -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:17.344 nvme0n1 00:30:17.344 21:35:32 -- keyring/file.sh@112 -- # bperf_cmd save_config 00:30:17.344 21:35:32 -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:30:17.604 21:35:32 -- keyring/file.sh@112 -- # config='{ 00:30:17.604 "subsystems": [ 00:30:17.604 { 00:30:17.604 "subsystem": "keyring", 00:30:17.604 "config": [ 00:30:17.604 { 00:30:17.604 "method": "keyring_file_add_key", 00:30:17.604 "params": { 00:30:17.604 "name": "key0", 00:30:17.604 "path": "/tmp/tmp.Oxh28IQS5s" 00:30:17.604 } 00:30:17.604 }, 00:30:17.604 { 00:30:17.604 "method": "keyring_file_add_key", 00:30:17.604 "params": { 00:30:17.604 "name": "key1", 00:30:17.604 "path": "/tmp/tmp.5V2JAp0U0m" 00:30:17.604 } 00:30:17.604 } 00:30:17.604 ] 00:30:17.604 }, 00:30:17.604 { 00:30:17.604 "subsystem": "iobuf", 00:30:17.604 "config": [ 00:30:17.604 { 00:30:17.604 "method": "iobuf_set_options", 00:30:17.604 "params": { 00:30:17.604 "small_pool_count": 8192, 00:30:17.604 "large_pool_count": 1024, 00:30:17.604 "small_bufsize": 8192, 00:30:17.604 "large_bufsize": 135168 00:30:17.604 } 00:30:17.604 } 00:30:17.604 ] 00:30:17.604 }, 00:30:17.604 { 00:30:17.604 "subsystem": "sock", 00:30:17.604 "config": [ 00:30:17.604 { 00:30:17.604 "method": "sock_impl_set_options", 00:30:17.604 "params": { 00:30:17.604 "impl_name": "posix", 00:30:17.604 "recv_buf_size": 2097152, 00:30:17.604 "send_buf_size": 2097152, 00:30:17.604 "enable_recv_pipe": true, 00:30:17.604 "enable_quickack": false, 00:30:17.604 "enable_placement_id": 0, 00:30:17.604 "enable_zerocopy_send_server": true, 00:30:17.604 "enable_zerocopy_send_client": false, 00:30:17.604 "zerocopy_threshold": 0, 00:30:17.604 "tls_version": 0, 00:30:17.604 "enable_ktls": false 00:30:17.604 } 00:30:17.604 }, 00:30:17.604 { 00:30:17.604 "method": "sock_impl_set_options", 00:30:17.604 "params": { 00:30:17.604 "impl_name": "ssl", 00:30:17.604 "recv_buf_size": 4096, 00:30:17.604 "send_buf_size": 4096, 00:30:17.604 "enable_recv_pipe": true, 00:30:17.604 "enable_quickack": false, 00:30:17.604 "enable_placement_id": 0, 00:30:17.604 "enable_zerocopy_send_server": true, 00:30:17.604 "enable_zerocopy_send_client": false, 00:30:17.604 "zerocopy_threshold": 0, 00:30:17.604 "tls_version": 0, 00:30:17.604 "enable_ktls": false 00:30:17.604 } 
00:30:17.604 } 00:30:17.604 ] 00:30:17.604 }, 00:30:17.604 { 00:30:17.604 "subsystem": "vmd", 00:30:17.604 "config": [] 00:30:17.604 }, 00:30:17.604 { 00:30:17.604 "subsystem": "accel", 00:30:17.604 "config": [ 00:30:17.604 { 00:30:17.604 "method": "accel_set_options", 00:30:17.604 "params": { 00:30:17.604 "small_cache_size": 128, 00:30:17.604 "large_cache_size": 16, 00:30:17.604 "task_count": 2048, 00:30:17.604 "sequence_count": 2048, 00:30:17.604 "buf_count": 2048 00:30:17.604 } 00:30:17.604 } 00:30:17.604 ] 00:30:17.604 }, 00:30:17.604 { 00:30:17.604 "subsystem": "bdev", 00:30:17.604 "config": [ 00:30:17.604 { 00:30:17.604 "method": "bdev_set_options", 00:30:17.604 "params": { 00:30:17.604 "bdev_io_pool_size": 65535, 00:30:17.604 "bdev_io_cache_size": 256, 00:30:17.604 "bdev_auto_examine": true, 00:30:17.604 "iobuf_small_cache_size": 128, 00:30:17.604 "iobuf_large_cache_size": 16 00:30:17.604 } 00:30:17.604 }, 00:30:17.604 { 00:30:17.604 "method": "bdev_raid_set_options", 00:30:17.604 "params": { 00:30:17.604 "process_window_size_kb": 1024 00:30:17.604 } 00:30:17.604 }, 00:30:17.604 { 00:30:17.604 "method": "bdev_iscsi_set_options", 00:30:17.604 "params": { 00:30:17.604 "timeout_sec": 30 00:30:17.605 } 00:30:17.605 }, 00:30:17.605 { 00:30:17.605 "method": "bdev_nvme_set_options", 00:30:17.605 "params": { 00:30:17.605 "action_on_timeout": "none", 00:30:17.605 "timeout_us": 0, 00:30:17.605 "timeout_admin_us": 0, 00:30:17.605 "keep_alive_timeout_ms": 10000, 00:30:17.605 "arbitration_burst": 0, 00:30:17.605 "low_priority_weight": 0, 00:30:17.605 "medium_priority_weight": 0, 00:30:17.605 "high_priority_weight": 0, 00:30:17.605 "nvme_adminq_poll_period_us": 10000, 00:30:17.605 "nvme_ioq_poll_period_us": 0, 00:30:17.605 "io_queue_requests": 512, 00:30:17.605 "delay_cmd_submit": true, 00:30:17.605 "transport_retry_count": 4, 00:30:17.605 "bdev_retry_count": 3, 00:30:17.605 "transport_ack_timeout": 0, 00:30:17.605 "ctrlr_loss_timeout_sec": 0, 00:30:17.605 "reconnect_delay_sec": 0, 00:30:17.605 "fast_io_fail_timeout_sec": 0, 00:30:17.605 "disable_auto_failback": false, 00:30:17.605 "generate_uuids": false, 00:30:17.605 "transport_tos": 0, 00:30:17.605 "nvme_error_stat": false, 00:30:17.605 "rdma_srq_size": 0, 00:30:17.605 "io_path_stat": false, 00:30:17.605 "allow_accel_sequence": false, 00:30:17.605 "rdma_max_cq_size": 0, 00:30:17.605 "rdma_cm_event_timeout_ms": 0, 00:30:17.605 "dhchap_digests": [ 00:30:17.605 "sha256", 00:30:17.605 "sha384", 00:30:17.605 "sha512" 00:30:17.605 ], 00:30:17.605 "dhchap_dhgroups": [ 00:30:17.605 "null", 00:30:17.605 "ffdhe2048", 00:30:17.605 "ffdhe3072", 00:30:17.605 "ffdhe4096", 00:30:17.605 "ffdhe6144", 00:30:17.605 "ffdhe8192" 00:30:17.605 ] 00:30:17.605 } 00:30:17.605 }, 00:30:17.605 { 00:30:17.605 "method": "bdev_nvme_attach_controller", 00:30:17.605 "params": { 00:30:17.605 "name": "nvme0", 00:30:17.605 "trtype": "TCP", 00:30:17.605 "adrfam": "IPv4", 00:30:17.605 "traddr": "127.0.0.1", 00:30:17.605 "trsvcid": "4420", 00:30:17.605 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:17.605 "prchk_reftag": false, 00:30:17.605 "prchk_guard": false, 00:30:17.605 "ctrlr_loss_timeout_sec": 0, 00:30:17.605 "reconnect_delay_sec": 0, 00:30:17.605 "fast_io_fail_timeout_sec": 0, 00:30:17.605 "psk": "key0", 00:30:17.605 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:17.605 "hdgst": false, 00:30:17.605 "ddgst": false 00:30:17.605 } 00:30:17.605 }, 00:30:17.605 { 00:30:17.605 "method": "bdev_nvme_set_hotplug", 00:30:17.605 "params": { 00:30:17.605 "period_us": 100000, 00:30:17.605 
"enable": false 00:30:17.605 } 00:30:17.605 }, 00:30:17.605 { 00:30:17.605 "method": "bdev_wait_for_examine" 00:30:17.605 } 00:30:17.605 ] 00:30:17.605 }, 00:30:17.605 { 00:30:17.605 "subsystem": "nbd", 00:30:17.605 "config": [] 00:30:17.605 } 00:30:17.605 ] 00:30:17.605 }' 00:30:17.605 21:35:32 -- keyring/file.sh@114 -- # killprocess 1420785 00:30:17.605 21:35:32 -- common/autotest_common.sh@936 -- # '[' -z 1420785 ']' 00:30:17.605 21:35:32 -- common/autotest_common.sh@940 -- # kill -0 1420785 00:30:17.605 21:35:32 -- common/autotest_common.sh@941 -- # uname 00:30:17.605 21:35:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:30:17.605 21:35:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1420785 00:30:17.605 21:35:32 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:30:17.605 21:35:32 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:30:17.605 21:35:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1420785' 00:30:17.605 killing process with pid 1420785 00:30:17.605 21:35:32 -- common/autotest_common.sh@955 -- # kill 1420785 00:30:17.605 Received shutdown signal, test time was about 1.000000 seconds 00:30:17.605 00:30:17.605 Latency(us) 00:30:17.605 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:17.605 =================================================================================================================== 00:30:17.605 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:17.605 21:35:32 -- common/autotest_common.sh@960 -- # wait 1420785 00:30:18.174 21:35:32 -- keyring/file.sh@117 -- # bperfpid=1422436 00:30:18.174 21:35:32 -- keyring/file.sh@119 -- # waitforlisten 1422436 /var/tmp/bperf.sock 00:30:18.174 21:35:32 -- common/autotest_common.sh@817 -- # '[' -z 1422436 ']' 00:30:18.174 21:35:32 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:18.174 21:35:32 -- common/autotest_common.sh@822 -- # local max_retries=100 00:30:18.174 21:35:32 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:18.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:30:18.174 21:35:32 -- common/autotest_common.sh@826 -- # xtrace_disable 00:30:18.174 21:35:32 -- common/autotest_common.sh@10 -- # set +x 00:30:18.174 21:35:32 -- keyring/file.sh@115 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:30:18.174 21:35:32 -- keyring/file.sh@115 -- # echo '{ 00:30:18.174 "subsystems": [ 00:30:18.174 { 00:30:18.174 "subsystem": "keyring", 00:30:18.174 "config": [ 00:30:18.174 { 00:30:18.174 "method": "keyring_file_add_key", 00:30:18.174 "params": { 00:30:18.174 "name": "key0", 00:30:18.174 "path": "/tmp/tmp.Oxh28IQS5s" 00:30:18.174 } 00:30:18.174 }, 00:30:18.174 { 00:30:18.174 "method": "keyring_file_add_key", 00:30:18.174 "params": { 00:30:18.174 "name": "key1", 00:30:18.174 "path": "/tmp/tmp.5V2JAp0U0m" 00:30:18.174 } 00:30:18.174 } 00:30:18.174 ] 00:30:18.174 }, 00:30:18.174 { 00:30:18.174 "subsystem": "iobuf", 00:30:18.174 "config": [ 00:30:18.174 { 00:30:18.174 "method": "iobuf_set_options", 00:30:18.174 "params": { 00:30:18.174 "small_pool_count": 8192, 00:30:18.174 "large_pool_count": 1024, 00:30:18.174 "small_bufsize": 8192, 00:30:18.174 "large_bufsize": 135168 00:30:18.174 } 00:30:18.174 } 00:30:18.174 ] 00:30:18.174 }, 00:30:18.174 { 00:30:18.174 "subsystem": "sock", 00:30:18.174 "config": [ 00:30:18.174 { 00:30:18.174 "method": "sock_impl_set_options", 00:30:18.174 "params": { 00:30:18.174 "impl_name": "posix", 00:30:18.174 "recv_buf_size": 2097152, 00:30:18.174 "send_buf_size": 2097152, 00:30:18.174 "enable_recv_pipe": true, 00:30:18.174 "enable_quickack": false, 00:30:18.174 "enable_placement_id": 0, 00:30:18.174 "enable_zerocopy_send_server": true, 00:30:18.174 "enable_zerocopy_send_client": false, 00:30:18.174 "zerocopy_threshold": 0, 00:30:18.174 "tls_version": 0, 00:30:18.174 "enable_ktls": false 00:30:18.174 } 00:30:18.174 }, 00:30:18.174 { 00:30:18.174 "method": "sock_impl_set_options", 00:30:18.174 "params": { 00:30:18.174 "impl_name": "ssl", 00:30:18.174 "recv_buf_size": 4096, 00:30:18.174 "send_buf_size": 4096, 00:30:18.174 "enable_recv_pipe": true, 00:30:18.174 "enable_quickack": false, 00:30:18.174 "enable_placement_id": 0, 00:30:18.174 "enable_zerocopy_send_server": true, 00:30:18.174 "enable_zerocopy_send_client": false, 00:30:18.174 "zerocopy_threshold": 0, 00:30:18.174 "tls_version": 0, 00:30:18.174 "enable_ktls": false 00:30:18.174 } 00:30:18.174 } 00:30:18.174 ] 00:30:18.174 }, 00:30:18.174 { 00:30:18.174 "subsystem": "vmd", 00:30:18.174 "config": [] 00:30:18.174 }, 00:30:18.174 { 00:30:18.174 "subsystem": "accel", 00:30:18.174 "config": [ 00:30:18.174 { 00:30:18.174 "method": "accel_set_options", 00:30:18.174 "params": { 00:30:18.174 "small_cache_size": 128, 00:30:18.174 "large_cache_size": 16, 00:30:18.174 "task_count": 2048, 00:30:18.174 "sequence_count": 2048, 00:30:18.174 "buf_count": 2048 00:30:18.174 } 00:30:18.174 } 00:30:18.174 ] 00:30:18.174 }, 00:30:18.174 { 00:30:18.174 "subsystem": "bdev", 00:30:18.174 "config": [ 00:30:18.174 { 00:30:18.174 "method": "bdev_set_options", 00:30:18.174 "params": { 00:30:18.174 "bdev_io_pool_size": 65535, 00:30:18.174 "bdev_io_cache_size": 256, 00:30:18.174 "bdev_auto_examine": true, 00:30:18.174 "iobuf_small_cache_size": 128, 00:30:18.174 "iobuf_large_cache_size": 16 00:30:18.174 } 00:30:18.174 }, 00:30:18.174 { 00:30:18.174 "method": "bdev_raid_set_options", 00:30:18.174 "params": { 00:30:18.174 "process_window_size_kb": 1024 00:30:18.174 } 00:30:18.174 }, 00:30:18.174 { 
00:30:18.174 "method": "bdev_iscsi_set_options", 00:30:18.174 "params": { 00:30:18.174 "timeout_sec": 30 00:30:18.174 } 00:30:18.174 }, 00:30:18.174 { 00:30:18.174 "method": "bdev_nvme_set_options", 00:30:18.174 "params": { 00:30:18.174 "action_on_timeout": "none", 00:30:18.174 "timeout_us": 0, 00:30:18.174 "timeout_admin_us": 0, 00:30:18.174 "keep_alive_timeout_ms": 10000, 00:30:18.174 "arbitration_burst": 0, 00:30:18.174 "low_priority_weight": 0, 00:30:18.174 "medium_priority_weight": 0, 00:30:18.174 "high_priority_weight": 0, 00:30:18.174 "nvme_adminq_poll_period_us": 10000, 00:30:18.174 "nvme_ioq_poll_period_us": 0, 00:30:18.174 "io_queue_requests": 512, 00:30:18.174 "delay_cmd_submit": true, 00:30:18.174 "transport_retry_count": 4, 00:30:18.174 "bdev_retry_count": 3, 00:30:18.174 "transport_ack_timeout": 0, 00:30:18.174 "ctrlr_loss_timeout_sec": 0, 00:30:18.174 "reconnect_delay_sec": 0, 00:30:18.174 "fast_io_fail_timeout_sec": 0, 00:30:18.174 "disable_auto_failback": false, 00:30:18.174 "generate_uuids": false, 00:30:18.174 "transport_tos": 0, 00:30:18.174 "nvme_error_stat": false, 00:30:18.174 "rdma_srq_size": 0, 00:30:18.174 "io_path_stat": false, 00:30:18.174 "allow_accel_sequence": false, 00:30:18.174 "rdma_max_cq_size": 0, 00:30:18.174 "rdma_cm_event_timeout_ms": 0, 00:30:18.174 "dhchap_digests": [ 00:30:18.174 "sha256", 00:30:18.174 "sha384", 00:30:18.174 "sha512" 00:30:18.174 ], 00:30:18.174 "dhchap_dhgroups": [ 00:30:18.174 "null", 00:30:18.174 "ffdhe2048", 00:30:18.174 "ffdhe3072", 00:30:18.174 "ffdhe4096", 00:30:18.174 "ffdhe6144", 00:30:18.174 "ffdhe8192" 00:30:18.174 ] 00:30:18.174 } 00:30:18.174 }, 00:30:18.174 { 00:30:18.175 "method": "bdev_nvme_attach_controller", 00:30:18.175 "params": { 00:30:18.175 "name": "nvme0", 00:30:18.175 "trtype": "TCP", 00:30:18.175 "adrfam": "IPv4", 00:30:18.175 "traddr": "127.0.0.1", 00:30:18.175 "trsvcid": "4420", 00:30:18.175 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:18.175 "prchk_reftag": false, 00:30:18.175 "prchk_guard": false, 00:30:18.175 "ctrlr_loss_timeout_sec": 0, 00:30:18.175 "reconnect_delay_sec": 0, 00:30:18.175 "fast_io_fail_timeout_sec": 0, 00:30:18.175 "psk": "key0", 00:30:18.175 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:18.175 "hdgst": false, 00:30:18.175 "ddgst": false 00:30:18.175 } 00:30:18.175 }, 00:30:18.175 { 00:30:18.175 "method": "bdev_nvme_set_hotplug", 00:30:18.175 "params": { 00:30:18.175 "period_us": 100000, 00:30:18.175 "enable": false 00:30:18.175 } 00:30:18.175 }, 00:30:18.175 { 00:30:18.175 "method": "bdev_wait_for_examine" 00:30:18.175 } 00:30:18.175 ] 00:30:18.175 }, 00:30:18.175 { 00:30:18.175 "subsystem": "nbd", 00:30:18.175 "config": [] 00:30:18.175 } 00:30:18.175 ] 00:30:18.175 }' 00:30:18.175 [2024-04-24 21:35:32.928399] Starting SPDK v24.05-pre git sha1 ea150257d / DPDK 23.11.0 initialization... 
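The second bdevperf instance is not configured over live RPCs at all: the JSON captured by save_config from the first instance (keyring, sock, and bdev subsystems included) is echoed into the new process through process substitution, which is where the -c /dev/fd/63 in the traced command line comes from. A sketch of that replay pattern:

    RPC="/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
    BPERF=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf
    config=$($RPC save_config)          # the JSON dump shown above
    # <(echo ...) surfaces as /dev/fd/63, matching the traced invocation
    "$BPERF" -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
        -r /var/tmp/bperf.sock -z -c <(echo "$config")
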
00:30:18.175 [2024-04-24 21:35:32.928521] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1422436 ] 00:30:18.175 EAL: No free 2048 kB hugepages reported on node 1 00:30:18.175 [2024-04-24 21:35:33.042679] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:18.175 [2024-04-24 21:35:33.132130] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:18.433 [2024-04-24 21:35:33.344882] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:30:18.690 21:35:33 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:30:18.690 21:35:33 -- common/autotest_common.sh@850 -- # return 0 00:30:18.690 21:35:33 -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:30:18.690 21:35:33 -- keyring/file.sh@120 -- # jq length 00:30:18.690 21:35:33 -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:18.948 21:35:33 -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:30:18.948 21:35:33 -- keyring/file.sh@121 -- # get_refcnt key0 00:30:18.948 21:35:33 -- keyring/common.sh@12 -- # get_key key0 00:30:18.948 21:35:33 -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:18.948 21:35:33 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:18.948 21:35:33 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:18.948 21:35:33 -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:19.206 21:35:33 -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:30:19.206 21:35:33 -- keyring/file.sh@122 -- # get_refcnt key1 00:30:19.206 21:35:33 -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:19.206 21:35:33 -- keyring/common.sh@12 -- # get_key key1 00:30:19.206 21:35:33 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:19.206 21:35:33 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:19.206 21:35:33 -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:19.206 21:35:34 -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:30:19.206 21:35:34 -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:30:19.206 21:35:34 -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:30:19.206 21:35:34 -- keyring/file.sh@123 -- # jq -r '.[].name' 00:30:19.466 21:35:34 -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:30:19.466 21:35:34 -- keyring/file.sh@1 -- # cleanup 00:30:19.466 21:35:34 -- keyring/file.sh@19 -- # rm -f /tmp/tmp.Oxh28IQS5s /tmp/tmp.5V2JAp0U0m 00:30:19.466 21:35:34 -- keyring/file.sh@20 -- # killprocess 1422436 00:30:19.466 21:35:34 -- common/autotest_common.sh@936 -- # '[' -z 1422436 ']' 00:30:19.466 21:35:34 -- common/autotest_common.sh@940 -- # kill -0 1422436 00:30:19.466 21:35:34 -- common/autotest_common.sh@941 -- # uname 00:30:19.466 21:35:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:30:19.466 21:35:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1422436 00:30:19.466 21:35:34 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:30:19.466 21:35:34 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:30:19.466 21:35:34 -- common/autotest_common.sh@954 -- # 
echo 'killing process with pid 1422436' 00:30:19.466 killing process with pid 1422436 00:30:19.466 21:35:34 -- common/autotest_common.sh@955 -- # kill 1422436 00:30:19.466 Received shutdown signal, test time was about 1.000000 seconds 00:30:19.466 00:30:19.466 Latency(us) 00:30:19.466 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:19.466 =================================================================================================================== 00:30:19.466 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:30:19.466 21:35:34 -- common/autotest_common.sh@960 -- # wait 1422436 00:30:19.727 21:35:34 -- keyring/file.sh@21 -- # killprocess 1420513 00:30:19.727 21:35:34 -- common/autotest_common.sh@936 -- # '[' -z 1420513 ']' 00:30:19.727 21:35:34 -- common/autotest_common.sh@940 -- # kill -0 1420513 00:30:19.727 21:35:34 -- common/autotest_common.sh@941 -- # uname 00:30:19.727 21:35:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:30:19.727 21:35:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1420513 00:30:19.727 21:35:34 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:30:19.727 21:35:34 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:30:19.727 21:35:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1420513' 00:30:19.727 killing process with pid 1420513 00:30:19.727 21:35:34 -- common/autotest_common.sh@955 -- # kill 1420513 00:30:19.727 [2024-04-24 21:35:34.679515] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:30:19.727 21:35:34 -- common/autotest_common.sh@960 -- # wait 1420513 00:30:20.667 00:30:20.667 real 0m11.408s 00:30:20.667 user 0m24.860s 00:30:20.667 sys 0m2.656s 00:30:20.667 21:35:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:30:20.667 21:35:35 -- common/autotest_common.sh@10 -- # set +x 00:30:20.667 ************************************ 00:30:20.667 END TEST keyring_file 00:30:20.667 ************************************ 00:30:20.667 21:35:35 -- spdk/autotest.sh@294 -- # [[ n == y ]] 00:30:20.667 21:35:35 -- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']' 00:30:20.667 21:35:35 -- spdk/autotest.sh@310 -- # '[' 0 -eq 1 ']' 00:30:20.667 21:35:35 -- spdk/autotest.sh@314 -- # '[' 0 -eq 1 ']' 00:30:20.667 21:35:35 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:30:20.667 21:35:35 -- spdk/autotest.sh@328 -- # '[' 0 -eq 1 ']' 00:30:20.667 21:35:35 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:30:20.667 21:35:35 -- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']' 00:30:20.667 21:35:35 -- spdk/autotest.sh@341 -- # '[' 0 -eq 1 ']' 00:30:20.667 21:35:35 -- spdk/autotest.sh@345 -- # '[' 0 -eq 1 ']' 00:30:20.667 21:35:35 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:30:20.667 21:35:35 -- spdk/autotest.sh@354 -- # '[' 0 -eq 1 ']' 00:30:20.667 21:35:35 -- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]] 00:30:20.667 21:35:35 -- spdk/autotest.sh@365 -- # [[ 0 -eq 1 ]] 00:30:20.667 21:35:35 -- spdk/autotest.sh@369 -- # [[ 0 -eq 1 ]] 00:30:20.667 21:35:35 -- spdk/autotest.sh@373 -- # [[ 0 -eq 1 ]] 00:30:20.667 21:35:35 -- spdk/autotest.sh@378 -- # trap - SIGINT SIGTERM EXIT 00:30:20.667 21:35:35 -- spdk/autotest.sh@380 -- # timing_enter post_cleanup 00:30:20.667 21:35:35 -- common/autotest_common.sh@710 -- # xtrace_disable 00:30:20.667 21:35:35 -- common/autotest_common.sh@10 -- # set +x 00:30:20.667 21:35:35 -- spdk/autotest.sh@381 -- # autotest_cleanup 00:30:20.667 21:35:35 -- 
00:30:20.667 21:35:35 -- spdk/autotest.sh@294 -- # [[ n == y ]]
00:30:20.667 21:35:35 -- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']'
00:30:20.667 21:35:35 -- spdk/autotest.sh@310 -- # '[' 0 -eq 1 ']'
00:30:20.667 21:35:35 -- spdk/autotest.sh@314 -- # '[' 0 -eq 1 ']'
00:30:20.667 21:35:35 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']'
00:30:20.667 21:35:35 -- spdk/autotest.sh@328 -- # '[' 0 -eq 1 ']'
00:30:20.667 21:35:35 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']'
00:30:20.667 21:35:35 -- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']'
00:30:20.667 21:35:35 -- spdk/autotest.sh@341 -- # '[' 0 -eq 1 ']'
00:30:20.667 21:35:35 -- spdk/autotest.sh@345 -- # '[' 0 -eq 1 ']'
00:30:20.667 21:35:35 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']'
00:30:20.667 21:35:35 -- spdk/autotest.sh@354 -- # '[' 0 -eq 1 ']'
00:30:20.667 21:35:35 -- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]]
00:30:20.667 21:35:35 -- spdk/autotest.sh@365 -- # [[ 0 -eq 1 ]]
00:30:20.667 21:35:35 -- spdk/autotest.sh@369 -- # [[ 0 -eq 1 ]]
00:30:20.667 21:35:35 -- spdk/autotest.sh@373 -- # [[ 0 -eq 1 ]]
00:30:20.667 21:35:35 -- spdk/autotest.sh@378 -- # trap - SIGINT SIGTERM EXIT
00:30:20.667 21:35:35 -- spdk/autotest.sh@380 -- # timing_enter post_cleanup
00:30:20.667 21:35:35 -- common/autotest_common.sh@710 -- # xtrace_disable
00:30:20.667 21:35:35 -- common/autotest_common.sh@10 -- # set +x
00:30:20.667 21:35:35 -- spdk/autotest.sh@381 -- # autotest_cleanup
00:30:20.667 21:35:35 -- common/autotest_common.sh@1378 -- # local autotest_es=0
00:30:20.667 21:35:35 -- common/autotest_common.sh@1379 -- # xtrace_disable
00:30:20.667 21:35:35 -- common/autotest_common.sh@10 -- # set +x
00:30:25.943 INFO: APP EXITING
00:30:25.943 INFO: killing all VMs
00:30:25.943 INFO: killing vhost app
00:30:25.943 INFO: EXIT DONE
00:30:28.603 0000:c9:00.0 (8086 0a54): Already using the nvme driver
00:30:28.603 0000:74:02.0 (8086 0cfe): Already using the idxd driver
00:30:28.603 0000:f1:02.0 (8086 0cfe): Already using the idxd driver
00:30:28.603 0000:cb:00.0 (8086 0a54): Already using the nvme driver
00:30:28.603 0000:79:02.0 (8086 0cfe): Already using the idxd driver
00:30:28.603 0000:6f:01.0 (8086 0b25): Already using the idxd driver
00:30:28.603 0000:6f:02.0 (8086 0cfe): Already using the idxd driver
00:30:28.603 0000:f6:01.0 (8086 0b25): Already using the idxd driver
00:30:28.603 0000:f6:02.0 (8086 0cfe): Already using the idxd driver
00:30:28.603 0000:74:01.0 (8086 0b25): Already using the idxd driver
00:30:28.603 0000:6a:02.0 (8086 0cfe): Already using the idxd driver
00:30:28.603 0000:79:01.0 (8086 0b25): Already using the idxd driver
00:30:28.603 0000:ec:01.0 (8086 0b25): Already using the idxd driver
00:30:28.603 0000:6a:01.0 (8086 0b25): Already using the idxd driver
00:30:28.603 0000:ca:00.0 (8086 0a54): Already using the nvme driver
00:30:28.603 0000:ec:02.0 (8086 0cfe): Already using the idxd driver
00:30:28.603 0000:e7:01.0 (8086 0b25): Already using the idxd driver
00:30:28.603 0000:e7:02.0 (8086 0cfe): Already using the idxd driver
00:30:28.603 0000:f1:01.0 (8086 0b25): Already using the idxd driver
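Editor's note: each "Already using the ... driver" line means the setup script found the PCI function already bound to the kernel driver it wanted, so no rebind was needed. The same information can be read directly from sysfs by resolving each device's driver symlink; a standalone sketch of that check (illustrative only, not the actual setup.sh code):

    for dev in /sys/bus/pci/devices/*; do
        bdf=${dev##*/}                    # e.g. 0000:c9:00.0
        ven=$(<"$dev/vendor") id=$(<"$dev/device")
        drv="no driver"
        [ -e "$dev/driver" ] && drv=$(basename "$(readlink -f "$dev/driver")")
        echo "$bdf (${ven#0x} ${id#0x}): $drv"
    done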
00:30:31.891 Cleaning
00:30:31.891 Removing: /var/run/dpdk/spdk0/config
00:30:31.891 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:30:31.891 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:30:31.891 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:30:31.891 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:30:31.891 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0
00:30:31.891 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1
00:30:31.891 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2
00:30:31.891 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3
00:30:31.891 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:30:31.891 Removing: /var/run/dpdk/spdk0/hugepage_info
00:30:31.891 Removing: /var/run/dpdk/spdk1/config
00:30:31.891 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0
00:30:31.891 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1
00:30:31.891 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2
00:30:31.891 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3
00:30:31.891 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0
00:30:31.891 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1
00:30:31.891 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2
00:30:31.891 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3
00:30:31.891 Removing: /var/run/dpdk/spdk1/fbarray_memzone
00:30:31.891 Removing: /var/run/dpdk/spdk1/hugepage_info
00:30:31.891 Removing: /var/run/dpdk/spdk2/config
00:30:31.891 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0
00:30:31.891 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1
00:30:31.891 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2
00:30:31.891 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3
00:30:31.891 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0
00:30:31.891 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1
00:30:31.892 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2
00:30:31.892 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3
00:30:31.892 Removing: /var/run/dpdk/spdk2/fbarray_memzone
00:30:31.892 Removing: /var/run/dpdk/spdk2/hugepage_info
00:30:31.892 Removing: /var/run/dpdk/spdk3/config
00:30:31.892 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0
00:30:31.892 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1
00:30:31.892 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2
00:30:31.892 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3
00:30:31.892 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0
00:30:31.892 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1
00:30:31.892 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2
00:30:31.892 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3
00:30:31.892 Removing: /var/run/dpdk/spdk3/fbarray_memzone
00:30:31.892 Removing: /var/run/dpdk/spdk3/hugepage_info
00:30:31.892 Removing: /var/run/dpdk/spdk4/config
00:30:31.892 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0
00:30:31.892 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1
00:30:31.892 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2
00:30:31.892 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3
00:30:31.892 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0
00:30:31.892 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1
00:30:31.892 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2
00:30:31.892 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3
00:30:31.892 Removing: /var/run/dpdk/spdk4/fbarray_memzone
00:30:31.892 Removing: /var/run/dpdk/spdk4/hugepage_info
00:30:31.892 Removing: /dev/shm/nvmf_trace.0
00:30:31.892 Removing: /dev/shm/spdk_tgt_trace.pid1007177
00:30:31.892 Removing: /var/run/dpdk/spdk0
00:30:31.892 Removing: /var/run/dpdk/spdk1
00:30:31.892 Removing: /var/run/dpdk/spdk2
00:30:31.892 Removing: /var/run/dpdk/spdk3
00:30:31.892 Removing: /var/run/dpdk/spdk4
00:30:31.892 Removing: /var/run/dpdk/spdk_pid1000939
00:30:31.892 Removing: /var/run/dpdk/spdk_pid1003751
00:30:31.892 Removing: /var/run/dpdk/spdk_pid1007177
00:30:31.892 Removing: /var/run/dpdk/spdk_pid1008002
00:30:31.892 Removing: /var/run/dpdk/spdk_pid1009219
00:30:31.892 Removing: /var/run/dpdk/spdk_pid1009664
00:30:31.892 Removing: /var/run/dpdk/spdk_pid1010963
00:30:31.892 Removing: /var/run/dpdk/spdk_pid1011098
00:30:31.892 Removing: /var/run/dpdk/spdk_pid1011525
00:30:31.892 Removing: /var/run/dpdk/spdk_pid1015529
00:30:31.892 Removing: /var/run/dpdk/spdk_pid1018354
00:30:31.892 Removing: /var/run/dpdk/spdk_pid1019000
00:30:31.892 Removing: /var/run/dpdk/spdk_pid1019444
00:30:31.892 Removing: /var/run/dpdk/spdk_pid1020033
00:30:31.892 Removing: /var/run/dpdk/spdk_pid1020417
00:30:31.892 Removing: /var/run/dpdk/spdk_pid1020748
00:30:31.892 Removing: /var/run/dpdk/spdk_pid1021073
00:30:31.892 Removing: /var/run/dpdk/spdk_pid1021429
00:30:31.892 Removing: /var/run/dpdk/spdk_pid1022095
00:30:31.892 Removing: /var/run/dpdk/spdk_pid1025609
00:30:31.892 Removing: /var/run/dpdk/spdk_pid1025968
00:30:31.892 Removing: /var/run/dpdk/spdk_pid1026316
00:30:31.892 Removing: /var/run/dpdk/spdk_pid1026608
00:30:31.892 Removing: /var/run/dpdk/spdk_pid1027344
00:30:31.892 Removing: /var/run/dpdk/spdk_pid1027649
00:30:31.892 Removing: /var/run/dpdk/spdk_pid1028532
00:30:31.892 Removing: /var/run/dpdk/spdk_pid1028801
00:30:31.892 Removing: /var/run/dpdk/spdk_pid1029375
00:30:31.892 Removing: /var/run/dpdk/spdk_pid1029669
00:30:31.892 Removing: /var/run/dpdk/spdk_pid1030011
00:30:31.892 Removing: /var/run/dpdk/spdk_pid1030033
00:30:31.892 Removing: /var/run/dpdk/spdk_pid1030982
00:30:31.892 Removing: /var/run/dpdk/spdk_pid1031283
00:30:31.892 Removing: /var/run/dpdk/spdk_pid1031711
00:30:31.892 Removing: /var/run/dpdk/spdk_pid1034341
00:30:31.892 Removing: /var/run/dpdk/spdk_pid1036016
00:30:31.892 Removing: /var/run/dpdk/spdk_pid1037874
00:30:31.892 Removing: /var/run/dpdk/spdk_pid1039914
00:30:31.892 Removing: /var/run/dpdk/spdk_pid1041798
00:30:31.892 Removing: /var/run/dpdk/spdk_pid1043683
00:30:31.892 Removing: /var/run/dpdk/spdk_pid1045698
00:30:31.892 Removing: /var/run/dpdk/spdk_pid1047533
00:30:31.892 Removing: /var/run/dpdk/spdk_pid1049357
00:30:31.892 Removing: /var/run/dpdk/spdk_pid1051414
00:30:31.892 Removing: /var/run/dpdk/spdk_pid1053272
00:30:31.892 Removing: /var/run/dpdk/spdk_pid1055097
00:30:31.892 Removing: /var/run/dpdk/spdk_pid1057065
00:30:31.892 Removing: /var/run/dpdk/spdk_pid1059014
00:30:31.892 Removing: /var/run/dpdk/spdk_pid1060844
00:30:31.892 Removing: /var/run/dpdk/spdk_pid1062934
00:30:31.892 Removing: /var/run/dpdk/spdk_pid1065084
00:30:31.892 Removing: /var/run/dpdk/spdk_pid1067337
00:30:31.892 Removing: /var/run/dpdk/spdk_pid1069203
00:30:31.892 Removing: /var/run/dpdk/spdk_pid1071043
00:30:31.892 Removing: /var/run/dpdk/spdk_pid1073021
00:30:31.892 Removing: /var/run/dpdk/spdk_pid1074950
00:30:31.892 Removing: /var/run/dpdk/spdk_pid1076819
00:30:31.892 Removing: /var/run/dpdk/spdk_pid1079420
00:30:31.892 Removing: /var/run/dpdk/spdk_pid1082334
00:30:31.892 Removing: /var/run/dpdk/spdk_pid1086666
00:30:31.892 Removing: /var/run/dpdk/spdk_pid1139280
00:30:31.892 Removing: /var/run/dpdk/spdk_pid1144396
00:30:31.892 Removing: /var/run/dpdk/spdk_pid1154739
00:30:31.892 Removing: /var/run/dpdk/spdk_pid1160785
00:30:31.892 Removing: /var/run/dpdk/spdk_pid1165288
00:30:31.892 Removing: /var/run/dpdk/spdk_pid1166051
00:30:31.892 Removing: /var/run/dpdk/spdk_pid1177728
00:30:31.892 Removing: /var/run/dpdk/spdk_pid1178047
00:30:31.892 Removing: /var/run/dpdk/spdk_pid1183180
00:30:31.892 Removing: /var/run/dpdk/spdk_pid1189828
00:30:31.892 Removing: /var/run/dpdk/spdk_pid1192623
00:30:31.892 Removing: /var/run/dpdk/spdk_pid1204719
00:30:31.892 Removing: /var/run/dpdk/spdk_pid1215143
00:30:31.892 Removing: /var/run/dpdk/spdk_pid1217292
00:30:31.892 Removing: /var/run/dpdk/spdk_pid1218358
00:30:31.892 Removing: /var/run/dpdk/spdk_pid1238480
00:30:31.892 Removing: /var/run/dpdk/spdk_pid1242974
00:30:31.892 Removing: /var/run/dpdk/spdk_pid1248122
00:30:31.892 Removing: /var/run/dpdk/spdk_pid1250091
00:30:31.892 Removing: /var/run/dpdk/spdk_pid1252294
00:30:31.892 Removing: /var/run/dpdk/spdk_pid1252606
00:30:31.892 Removing: /var/run/dpdk/spdk_pid1252840
00:30:31.892 Removing: /var/run/dpdk/spdk_pid1253024
00:30:31.892 Removing: /var/run/dpdk/spdk_pid1253874
00:30:31.892 Removing: /var/run/dpdk/spdk_pid1255972
00:30:31.892 Removing: /var/run/dpdk/spdk_pid1257242
00:30:31.892 Removing: /var/run/dpdk/spdk_pid1257881
00:30:31.892 Removing: /var/run/dpdk/spdk_pid1260584
00:30:31.892 Removing: /var/run/dpdk/spdk_pid1261247
00:30:31.892 Removing: /var/run/dpdk/spdk_pid1262169
00:30:31.892 Removing: /var/run/dpdk/spdk_pid1267301
00:30:31.892 Removing: /var/run/dpdk/spdk_pid1273648
00:30:31.892 Removing: /var/run/dpdk/spdk_pid1278591
00:30:31.892 Removing: /var/run/dpdk/spdk_pid1287796
00:30:31.892 Removing: /var/run/dpdk/spdk_pid1287799
00:30:31.892 Removing: /var/run/dpdk/spdk_pid1294277
00:30:31.892 Removing: /var/run/dpdk/spdk_pid1294577
00:30:31.892 Removing: /var/run/dpdk/spdk_pid1294745
00:30:32.152 Removing: /var/run/dpdk/spdk_pid1295334
00:30:32.152 Removing: /var/run/dpdk/spdk_pid1295346
00:30:32.152 Removing: /var/run/dpdk/spdk_pid1300754
00:30:32.152 Removing: /var/run/dpdk/spdk_pid1301490
00:30:32.152 Removing: /var/run/dpdk/spdk_pid1306722
00:30:32.152 Removing: /var/run/dpdk/spdk_pid1310007
00:30:32.152 Removing: /var/run/dpdk/spdk_pid1316338
00:30:32.152 Removing: /var/run/dpdk/spdk_pid1322529
00:30:32.152 Removing: /var/run/dpdk/spdk_pid1331101
00:30:32.152 Removing: /var/run/dpdk/spdk_pid1331172
00:30:32.152 Removing: /var/run/dpdk/spdk_pid1353189
00:30:32.152 Removing: /var/run/dpdk/spdk_pid1355578
00:30:32.152 Removing: /var/run/dpdk/spdk_pid1357976
00:30:32.152 Removing: /var/run/dpdk/spdk_pid1360361
00:30:32.152 Removing: /var/run/dpdk/spdk_pid1364583
00:30:32.152 Removing: /var/run/dpdk/spdk_pid1365273
00:30:32.152 Removing: /var/run/dpdk/spdk_pid1366099
00:30:32.152 Removing: /var/run/dpdk/spdk_pid1366870
00:30:32.152 Removing: /var/run/dpdk/spdk_pid1368287
00:30:32.152 Removing: /var/run/dpdk/spdk_pid1369173
00:30:32.152 Removing: /var/run/dpdk/spdk_pid1369791
00:30:32.152 Removing: /var/run/dpdk/spdk_pid1370688
00:30:32.152 Removing: /var/run/dpdk/spdk_pid1372040
00:30:32.152 Removing: /var/run/dpdk/spdk_pid1381508
00:30:32.152 Removing: /var/run/dpdk/spdk_pid1381527
00:30:32.152 Removing: /var/run/dpdk/spdk_pid1388268
00:30:32.152 Removing: /var/run/dpdk/spdk_pid1390805
00:30:32.152 Removing: /var/run/dpdk/spdk_pid1393326
00:30:32.152 Removing: /var/run/dpdk/spdk_pid1394698
00:30:32.152 Removing: /var/run/dpdk/spdk_pid1397334
00:30:32.152 Removing: /var/run/dpdk/spdk_pid1398885
00:30:32.152 Removing: /var/run/dpdk/spdk_pid1409798
00:30:32.152 Removing: /var/run/dpdk/spdk_pid1410392
00:30:32.152 Removing: /var/run/dpdk/spdk_pid1410989
00:30:32.152 Removing: /var/run/dpdk/spdk_pid1414685
00:30:32.152 Removing: /var/run/dpdk/spdk_pid1415304
00:30:32.152 Removing: /var/run/dpdk/spdk_pid1415878
00:30:32.152 Removing: /var/run/dpdk/spdk_pid1420513
00:30:32.152 Removing: /var/run/dpdk/spdk_pid1420785
00:30:32.152 Removing: /var/run/dpdk/spdk_pid1422436
00:30:32.152 Clean
00:30:32.152 21:35:47 -- common/autotest_common.sh@1437 -- # return 0
00:30:32.152 21:35:47 -- spdk/autotest.sh@382 -- # timing_exit post_cleanup
00:30:32.152 21:35:47 -- common/autotest_common.sh@716 -- # xtrace_disable
00:30:32.152 21:35:47 -- common/autotest_common.sh@10 -- # set +x
00:30:32.152 21:35:47 -- spdk/autotest.sh@384 -- # timing_exit autotest
00:30:32.152 21:35:47 -- common/autotest_common.sh@716 -- # xtrace_disable
00:30:32.152 21:35:47 -- common/autotest_common.sh@10 -- # set +x
00:30:32.413 21:35:47 -- spdk/autotest.sh@385 -- # chmod a+r /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/timing.txt
00:30:32.413 21:35:47 -- spdk/autotest.sh@387 -- # [[ -f /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/udev.log ]]
00:30:32.413 21:35:47 -- spdk/autotest.sh@387 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/udev.log
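Editor's note: the Cleaning block above is a sweep of stale DPDK/SPDK runtime state: the per-process /var/run/dpdk/spdk*/ directories (config, fbarray memseg files, memzone and hugepage metadata), trace shared memory under /dev/shm, and the accumulated spdk_pid* sockets. A minimal sketch of such a sweep, assuming the same default runtime locations (illustrative only; the real autotest_cleanup does more than this):

    for f in /var/run/dpdk/spdk*/config /var/run/dpdk/spdk*/fbarray_* \
             /var/run/dpdk/spdk*/hugepage_info /var/run/dpdk/spdk[0-9] \
             /var/run/dpdk/spdk_pid* /dev/shm/spdk_tgt_trace.pid* /dev/shm/nvmf_trace.*; do
        [ -e "$f" ] || continue   # unmatched globs stay literal; skip them
        echo "Removing: $f"
        rm -rf "$f"
    done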
00:30:32.413 21:35:47 -- spdk/autotest.sh@389 -- # hash lcov
00:30:32.413 21:35:47 -- spdk/autotest.sh@389 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]]
00:30:32.413 21:35:47 -- spdk/autotest.sh@391 -- # hostname
00:30:32.413 21:35:47 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/dsa-phy-autotest/spdk -t spdk-fcp-11 -o /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_test.info
00:30:32.413 geninfo: WARNING: invalid characters removed from testname!
00:30:54.377 21:36:08 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_total.info
00:30:55.311 21:36:10 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_total.info
00:30:56.686 21:36:11 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_total.info
00:30:58.060 21:36:12 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_total.info
00:30:58.994 21:36:13 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_total.info
00:31:00.370 21:36:15 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_total.info
00:31:01.745 21:36:16 -- spdk/autotest.sh@398 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
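Editor's note: the coverage steps above form a standard lcov pipeline: capture the counters produced by the test run, merge them with the pre-test baseline, then repeatedly subtract uninteresting paths from the merged tracefile. Condensed, with the long --rc flag string factored out (same flags as in the trace; $SPDK_DIR is a stand-in for the checkout path):

    LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external -q"
    lcov $LCOV_OPTS -c -d "$SPDK_DIR" -t "$(hostname)" -o cov_test.info    # capture post-test counters
    lcov $LCOV_OPTS -a cov_base.info -a cov_test.info -o cov_total.info   # merge with the baseline
    # drop vendored code, system headers and tools we do not want in the report
    for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
        lcov $LCOV_OPTS -r cov_total.info "$pat" -o cov_total.info
    done
    rm -f cov_base.info cov_test.info                                     # keep only the merged result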
00:31:01.745 21:36:16 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh
00:31:01.745 21:36:16 -- scripts/common.sh@502 -- $ [[ -e /bin/wpdk_common.sh ]]
00:31:01.745 21:36:16 -- scripts/common.sh@510 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:31:01.745 21:36:16 -- scripts/common.sh@511 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:31:01.746 21:36:16 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:01.746 21:36:16 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:01.746 21:36:16 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:01.746 21:36:16 -- paths/export.sh@5 -- $ export PATH
00:31:01.746 21:36:16 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:01.746 21:36:16 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/dsa-phy-autotest/spdk/../output
00:31:01.746 21:36:16 -- common/autobuild_common.sh@435 -- $ date +%s
00:31:01.746 21:36:16 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1713987376.XXXXXX
00:31:01.746 21:36:16 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1713987376.6Zs3KG
00:31:01.746 21:36:16 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]]
00:31:01.746 21:36:16 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']'
00:31:01.746 21:36:16 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/'
00:31:01.746 21:36:16 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/dsa-phy-autotest/spdk/xnvme --exclude /tmp'
00:31:01.746 21:36:16 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/dsa-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:31:01.746 21:36:16 -- common/autobuild_common.sh@451 -- $ get_config_params
00:31:01.746 21:36:16 -- common/autotest_common.sh@385 -- $ xtrace_disable
00:31:01.746 21:36:16 -- common/autotest_common.sh@10 -- $ set +x
00:31:01.746 21:36:16 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk'
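Editor's note: the three PATH= assignments above show paths/export.sh prepending one tool directory per step (golangci-lint, Go, protoc) without deduplicating, which is why the earlier entries repeat further down the string. The visible effect is just (a sketch of what the trace shows, not the script itself):

    # /etc/opt/spdk-pkgdep/paths/export.sh, effect as seen in the xtrace
    PATH=/opt/golangci/1.54.2/bin:$PATH
    PATH=/opt/go/1.21.1/bin:$PATH
    PATH=/opt/protoc/21.7/bin:$PATH
    export PATH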
00:31:01.746 21:36:16 -- common/autobuild_common.sh@453 -- $ start_monitor_resources
00:31:01.746 21:36:16 -- pm/common@17 -- $ local monitor
00:31:01.746 21:36:16 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:31:01.746 21:36:16 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=1434061
00:31:01.746 21:36:16 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:31:01.746 21:36:16 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=1434063
00:31:01.746 21:36:16 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:31:01.746 21:36:16 -- pm/common@21 -- $ date +%s
00:31:01.746 21:36:16 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=1434065
00:31:01.746 21:36:16 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:31:01.746 21:36:16 -- pm/common@21 -- $ date +%s
00:31:01.746 21:36:16 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=1434067
00:31:01.746 21:36:16 -- pm/common@26 -- $ sleep 1
00:31:01.746 21:36:16 -- pm/common@21 -- $ date +%s
00:31:01.746 21:36:16 -- pm/common@21 -- $ date +%s
00:31:01.746 21:36:16 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1713987376
00:31:01.746 21:36:16 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1713987376
00:31:01.746 21:36:16 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1713987376
00:31:01.746 21:36:16 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1713987376
00:31:01.746 Redirecting to /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1713987376_collect-vmstat.pm.log
00:31:01.746 Redirecting to /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1713987376_collect-bmc-pm.bmc.pm.log
00:31:01.746 Redirecting to /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1713987376_collect-cpu-load.pm.log
00:31:01.746 Redirecting to /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1713987376_collect-cpu-temp.pm.log
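Editor's note: each collect-* monitor above is launched with sudo -E, told where to drop its output (-d) and given a log-name prefix (-p); the "Redirecting to ..." lines show stdout being diverted into the power/ directory. The shutdown traced right after this walks pid files in the same directory and TERMs whatever is still running. A stripped-down sketch of that start/stop pairing, under the assumption that each collector writes a <name>.pid file beside its log (variable names here are illustrative):

    power_dir=/var/jenkins/workspace/dsa-phy-autotest/output/power   # assumed layout
    # start: run a collector in the background; it records its own pid
    sudo -E ./collect-cpu-load -d "$power_dir" -l -p "monitor.$label" &
    # stop: signal every collector that left a pid file behind
    for pidfile in "$power_dir"/collect-*.pid; do
        [ -e "$pidfile" ] || continue
        sudo kill -TERM "$(cat "$pidfile")"
    done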
00:31:02.684 21:36:17 -- common/autobuild_common.sh@454 -- $ trap stop_monitor_resources EXIT
00:31:02.684 21:36:17 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j128
00:31:02.684 21:36:17 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/dsa-phy-autotest/spdk
00:31:02.684 21:36:17 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:31:02.684 21:36:17 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:31:02.684 21:36:17 -- spdk/autopackage.sh@19 -- $ timing_finish
00:31:02.684 21:36:17 -- common/autotest_common.sh@722 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:31:02.684 21:36:17 -- common/autotest_common.sh@723 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:31:02.684 21:36:17 -- common/autotest_common.sh@725 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/timing.txt
00:31:02.684 21:36:17 -- spdk/autopackage.sh@20 -- $ exit 0
00:31:02.684 21:36:17 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:31:02.684 21:36:17 -- pm/common@30 -- $ signal_monitor_resources TERM
00:31:02.684 21:36:17 -- pm/common@41 -- $ local monitor pid pids signal=TERM
00:31:02.684 21:36:17 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:31:02.684 21:36:17 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:31:02.684 21:36:17 -- pm/common@45 -- $ pid=1434075
00:31:02.684 21:36:17 -- pm/common@52 -- $ sudo kill -TERM 1434075
00:31:02.684 21:36:17 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:31:02.684 21:36:17 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:31:02.684 21:36:17 -- pm/common@45 -- $ pid=1434074
00:31:02.684 21:36:17 -- pm/common@52 -- $ sudo kill -TERM 1434074
00:31:02.684 21:36:17 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:31:02.684 21:36:17 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:31:02.684 21:36:17 -- pm/common@45 -- $ pid=1434078
00:31:02.684 21:36:17 -- pm/common@52 -- $ sudo kill -TERM 1434078
00:31:02.944 21:36:17 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:31:02.944 21:36:17 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:31:02.944 21:36:17 -- pm/common@45 -- $ pid=1434079
00:31:02.944 21:36:17 -- pm/common@52 -- $ sudo kill -TERM 1434079
00:31:02.944 + [[ -n 886486 ]]
00:31:02.944 + sudo kill 886486
00:31:02.954 [Pipeline] }
00:31:02.971 [Pipeline] // stage
00:31:02.975 [Pipeline] }
00:31:02.992 [Pipeline] // timeout
00:31:02.997 [Pipeline] }
00:31:03.016 [Pipeline] // catchError
00:31:03.021 [Pipeline] }
00:31:03.039 [Pipeline] // wrap
00:31:03.045 [Pipeline] }
00:31:03.059 [Pipeline] // catchError
00:31:03.068 [Pipeline] stage
00:31:03.070 [Pipeline] { (Epilogue)
00:31:03.083 [Pipeline] catchError
00:31:03.085 [Pipeline] {
00:31:03.099 [Pipeline] echo
00:31:03.100 Cleanup processes
00:31:03.106 [Pipeline] sh
00:31:03.391 + sudo pgrep -af /var/jenkins/workspace/dsa-phy-autotest/spdk
00:31:03.391 1434595 sudo pgrep -af /var/jenkins/workspace/dsa-phy-autotest/spdk
00:31:03.402 [Pipeline] sh
00:31:03.683 ++ sudo pgrep -af /var/jenkins/workspace/dsa-phy-autotest/spdk
00:31:03.683 ++ grep -v 'sudo pgrep'
00:31:03.683 ++ awk '{print $1}'
00:31:03.683 + sudo kill -9
00:31:03.683 + true
00:31:03.695 [Pipeline] sh
00:31:03.981 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:31:13.989 [Pipeline] sh
00:31:14.273 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:31:14.273 Artifacts sizes are good
00:31:14.288 [Pipeline] archiveArtifacts
00:31:14.296 Archiving artifacts
00:31:14.485 [Pipeline] sh
00:31:14.820 + sudo chown -R sys_sgci /var/jenkins/workspace/dsa-phy-autotest
00:31:14.836 [Pipeline] cleanWs
00:31:14.847 [WS-CLEANUP] Deleting project workspace...
00:31:14.848 [WS-CLEANUP] Deferred wipeout is used...
00:31:14.854 [WS-CLEANUP] done
00:31:14.856 [Pipeline] }
00:31:14.877 [Pipeline] // catchError
00:31:14.889 [Pipeline] sh
00:31:15.173 + logger -p user.info -t JENKINS-CI
00:31:15.183 [Pipeline] }
00:31:15.199 [Pipeline] // stage
00:31:15.204 [Pipeline] }
00:31:15.221 [Pipeline] // node
00:31:15.226 [Pipeline] End of Pipeline
00:31:15.260 Finished: SUCCESS